Is imagination necessary for operant learning? And if so, are bacteria imagining?
This question came from an interesting discussion I was recently having on selfawarepattern’s blog post regarding Consciousness and Panpsychism. The author says:
“..Imagination is the fourth layer. It includes simulations of various sensory and action scenarios, including past or future ones. Imagination seems necessary for operant learning.. “
After several replies, I thought it would be a good idea to present this as a separate post here. To be fair, the author only extends imagination to vertebrates, on account of their ability to sense at a distance. But can we take it a few steps further than that?
When talking about classical examples of Operant conditioning, we usually refer to the Skinner Box experiments:
In this experiment, the rat's bar-pressing behavior is the 'operant'. Its consequence is a food pellet (a positive reward), which acts as a 'reinforcer' for the preceding behavior. If the reward is given every time the bar is pressed (called continuous reinforcement), then learning takes place based solely on the behavior (operant) and its consequences (reinforcer). This is not based on imagination but only on actions (behavior) and reactions (consequences).
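As a toy illustration of this point, here is a minimal sketch in Python (the function name and all the numbers are mine, purely illustrative): an agent's probability of emitting the operant is nudged up after every reinforced press, and nothing in the model represents an imagined outcome.

```python
import random

def simulate_operant(trials=1000, seed=0):
    """Toy model of continuous reinforcement: every bar press yields a
    pellet, and each pellet nudges up the probability of pressing again.
    No internal model or 'imagination' is represented anywhere -- only
    the action-consequence contingency."""
    rng = random.Random(seed)
    p_press = 0.1  # initial chance the rat presses the bar
    for _ in range(trials):
        if rng.random() < p_press:  # behaviour (operant) occurs
            # consequence: food pellet reinforces the behaviour
            p_press = min(1.0, p_press + 0.05 * (1.0 - p_press))
    return p_press
```

After enough reinforced trials the press probability saturates near 1.0, which is all the "learning" there is in this picture.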
A good explanation can be found here: https://www.scholarpedia.org/article/Operant_conditioning
One of the commentators (Paultorek) argues:
“…research has analyzed the brain activity of rodents trained in such tasks, and finds that when they are (by the above hypothesis) anticipating future results, memories of the past experiences are being activated… “
However, I argue that such behavior is not limited to humans and vertebrates but extends to almost all organisms, including protozoa and bacteria. The only requirements are the ability to change the environment and the possession of a goal, which for bacteria may be mere survival.
Referring to the brain activity analyzed in rodents during such behaviors, the biggest issue is that their brains are not the same as ours, so how do we know they are imagining as we do?
In a general sense, processing of such learned behavior happens in the bacterium, the rodent, and the human. The processing in bacteria is simpler than in the rodent, and the rodent's processing is simpler than the human's. But all of it occurs through chemical processes.
So if we can extend the courtesy of imagination to rodents, why not extend it to bacteria as well? My opinion is that we cannot extend this courtesy at all!
Take gambling, for example:
Gambling machines are a good example of exploiting operant conditioning in humans. When the gambler's activity leads to an occasional reward, the gambling is reinforced. Yes, one could say that the gambler can imagine getting a reward, but that is not what drives his behavior. It is the reinforcement that drives the behavior, and imagination is entirely separate from this contingency.
This is because the gambler can also imagine NOT getting the reward, which is in truth the most likely outcome, and the one he actually suffers. Such imagination, however, usually does not reduce his gambling behavior.
The pull of gambling (via operant conditioning) works in the opposite direction, and resisting it is an uphill battle. It can go to the extent of becoming a disease, now formally recognized in the DSM-5 as 'gambling disorder'.
So as far as operant conditioning goes, there is no role for imagined outcomes, only for actual outcomes. Any imagination that happens is separate from this contingency.
- Huitt, W., & Hummel, J. (1997). An introduction to operant (instrumental) conditioning. Educational Psychology Interactive. Valdosta, GA: Valdosta State University. Retrieved from, https://www.edpsycinteractive.org/topics/behsys/operant.html
- C.F. Lowe (1985). Behaviour Analysis and Contemporary Psychology. Retrieved from https://books.google.co.uk
Fizan, do you have a link to studies showing operant conditioning in bacteria? I’d be interested to read about the details. The simplest organism I had previously read about showing operant conditioning was the C. elegans worm, but I haven’t been able to get the details on the specific experiment that showed it.
In their book, ‘The Ancient Origins of Consciousness’, Todd Feinberg and Jon Mallatt assert that operant learning, sometimes known as instrumental learning, has only been demonstrated in some animal phyla, and posit it as one of their criteria for sensory or primary consciousness. My statements about operant learning were made from that perspective. But if their views are mistaken, I’d like to know.
Thanks for the comment Mike.
I have added the references to the end of the post now (after my spam blocker blocked me when I tried to put those links in this comment!).
The first two suggest that recent studies show operant behaviour in bacteria (if not discrimination learning). The third is a book (freely available on Google Books) that talks a little about operant behaviour in simple organisms from page 85 onwards. The last two are more interesting (unfortunately behind paywalls!), as such studies are starting to suggest there is animal-like learning in single-celled organisms such as paramecia.
I think it is easy to grasp this concept on its own as well. Because how else would microorganisms move about in a goal-directed manner? By trial and error: by random behaviour coupled with repeating what works and not repeating what doesn’t (especially avoiding what harms).
Fizan, I appreciate the references. Perusing them, I’m a bit disappointed in the first two. The reference to bacterial operant conditioning in the first seemed a bit offhand and didn’t cite anything. I actually couldn’t find any reference to it in the second reference. (I searched on “bact”, “cell”, and “param” without hits.)
But more broadly, based on the heading in the section of the book, I think we have to make an important distinction here, and maybe it’s one that’s not adequately addressed in the literature. An organism can have operant behavior without any comprehension (or somewhat related to our other discussion, what we interpret as operant behavior) which can be modified by various stimuli. That could be considered conditioning of that operant behavior.
But my understanding of the term “operant learning” specifically, at least in regards to how Feinberg and Mallatt use it, is learning that requires that the organism be able to understand a consequence. They describe rats that, after being decerebrated (have their cerebrum removed), lose any capacity for operant learning, although they retain the ability for classical Pavlovian style conditioning. (Kindle location 3589 in their book.)
Not sure what to make of those paramecium papers. I’ve heard about paramecium before, but this was the first time I’d perused scientific papers. (Thank you!). I’m not clear on the relationship between discrimination learning and operant learning. But paramecium are pretty unusual for single celled organisms.
All of which is to say, you (and another commenter) have succeeded in reducing my certainty about how useful operant learning is as a measure of cognition. It seems like operant behavior can be conditioned without comprehension. I need to dig further into whether there’s any behavioral difference between uncomprehending operant conditioning and the operant learning F&M discuss. It’s hard for me to imagine they would have gotten such a basic concept wrong.
I haven’t read Feinberg and Mallatt’s book, and I am sure they know what they are talking about; I may well be off the mark. But here is how I see it, taking a very crude example (which I may be getting wrong) from human brains:
When a person wants to inject amphetamine, cortical neurons fire, and association happens between all the relevant neurons required to perform this complex behavior (spatial-location neurons in the hippocampus and neurons in the visual cortex, for example). Motor coordination happens at the basal ganglia and cerebellum. Signals then relay down the spinal cord to motor neurons, which in turn relay to muscle fibers. At the same time, modulation and fine-tuning are going on via feedback from the motor movements and from sensory and proprioceptive neurons, coordinated at the basal ganglia and cerebellar level. This all happens dynamically and leads to a coordinated behavior that accomplishes the task of injecting the amphetamine.
In doing so, not all of the brain was used, only a specific set of neurons and their connections, in (more or less) a specific sequence. When the consequence of this action is nothing (for example, they injected into the air by mistake), the involved neurons and their connections are not reinforced. When they successfully inject, the amphetamine causes a release of dopamine in the reward pathway, and all the specific neurons that were involved, and their connections, get reinforced.
The reward pathway has been shown to be activated whenever we get a sense of pleasure, and this is associated with a rush of dopamine (I took amphetamine as an example because it is itself a dopamine agonist). Because the neuronal pathways that lead to the reward are reinforced more than those that do not, the direction of activity becomes tailored towards, and dominated by, this behavior.
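A crude sketch of that reinforcement story (all names and numbers here are mine, with no claim of biological fidelity): two candidate pathways compete in proportion to their 'synaptic' weights, only one leads to reward, and each rewarded use strengthens that pathway, so activity drifts towards it without any represented goal.

```python
import random

def reward_pathway(trials=200, seed=2):
    """Illustrative model of pathway reinforcement: a pathway is chosen
    with probability proportional to its weight; when the rewarded
    pathway fires, its weight is bumped up (standing in for dopamine
    release), so behaviour becomes dominated by it over time."""
    rng = random.Random(seed)
    weights = {"rewarded": 1.0, "unrewarded": 1.0}
    for _ in range(trials):
        total = weights["rewarded"] + weights["unrewarded"]
        # pick a pathway with probability proportional to its weight
        if rng.random() < weights["rewarded"] / total:
            # 'dopamine release' reinforces the pathway that was used
            weights["rewarded"] *= 1.05
    return weights
```

After a few hundred trials the rewarded pathway's weight dwarfs the other, which is the "tailored and dominated" direction of activity described above.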
I see this as operant conditioning or learning. In this whole process, I do not see much of a role for imagination. That is not to say that going through it doesn’t carry with it a feeling of anticipation or even imagination.
This is in a way similar to the paramecium that, sucked up into a small capillary tube, eventually took less and less time to escape from it: each time, the previous behaviors that were fruitful remained and those that weren’t were discarded, overall increasing efficiency. If certain behaviors remain and others don’t, then that is operant learning (since the paramecium has learned which behaviors are fruitful).
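That "keep what works, discard what doesn't" idea can be sketched minimally as follows (a hypothetical model of mine, not the actual paramecium experiments): the organism draws moves from a behavioural repertoire, unfruitful moves are pruned when used, and escape times shrink across episodes with no goal representation anywhere.

```python
import random

def escape_times(episodes=5, distance=20, seed=1):
    """Toy 'paramecium in a capillary': moves are drawn at random from a
    repertoire of forward (+1) and backward (-1) wriggles; a backward
    move that is used gets discarded from the repertoire, so later
    escapes tend to be faster -- learning by pruning, not by imagining."""
    rng = random.Random(seed)
    repertoire = [+1, +1, -1, -1, -1]  # initial mix of behaviours
    times = []
    for _ in range(episodes):
        pos, steps = 0, 0
        while pos < distance:          # not yet out of the tube
            move = rng.choice(repertoire)
            pos = max(0, pos + move)
            steps += 1
            if move < 0 and len(repertoire) > 2:
                repertoire.remove(-1)  # discard an unfruitful behaviour
        times.append(steps)
    return times
```

Once the unfruitful moves have been pruned, every later escape takes the minimum number of steps, mirroring the shrinking escape times reported for the paramecium.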
In humans, with time, the involvement of the cortical neurons becomes less and less, and much of the role of enacting the behavior is taken over by basal ganglia (limbic) processing. In my opinion, when the processing is in the cortical areas, there can be greater association with, and influence of, imagination and reflection on the behavior. But as this is potentially a positive feedback loop, the processing becomes more and more implicit and automatic, with less room for reflection or imagination. This is why addicts describe a craving to seek drugs that seems out of their control.
I think a lot depends here on whether the learning is reflexive or nonreflexive. Reflexive is going to be pretty much straight basal ganglia / limbic processing. For non-reflexive, we need to involve the prefrontal cortex and at least a smidgen of imagination.
Looking more closely at Feinberg and Mallatt’s material on this, they specify that their criterion is:
“Learning a global, nonreflexive operant response based upon a valenced result”
Feinberg, Todd E.. The Ancient Origins of Consciousness: How the Brain Created Experience (MIT Press) (Kindle Locations 3664-3665). The MIT Press. Kindle Edition.
It’s not clear if the “nonreflexive” label is meant synonymously or as a qualifier here. Looking at their cited papers, at least one seems to mean it synonymously.
“Non‑reflexive (operant) measures: Measures of behaviours that require spinal‑cerebrospinal integration, which are lost after decerebration. The use of operant measures specifically requires a learned, motivated behaviour that terminates exposure to the noxious stimulus.”
Jeffrey S. Mogil (2009). Animal models of pain: progress and challenges
So, I’m not clear if this is talking about all operant learning (all of which would be nonreflexive), or about a subset of operant learning that is nonreflexive. In any case, from now on when talking about this, I’m going to say “nonreflexive operant learning”, just to be safe.
I think we may be converging now. I agree that in animals and humans operant learning starts off with the involvement of higher cortical areas, and in that state it (most likely) is not reflexive.
What I am more concerned with is whether the general paradigm of operant learning needs these higher cortical areas at all. Yes, in the study they cite it appears that animals require the cerebrum to demonstrate the learned response (and perhaps even to learn it in the first place).
However, could it be that this is simply how it is done in animals, who have these structures already available? Why wouldn’t they utilize them? Hence, when these structures are removed, they are unable to demonstrate this learning.
Yet the paradigm of operant learning may still be possible in organisms that don’t have these structures (for example, paramecia). If that is the case, then higher brain areas are not essential for operant learning, so these structures are unlikely to have evolved in order to bring about operant learning. They could have evolved to enhance it and make it more efficient, perhaps (or they may have evolved for other reasons and then been utilized for this purpose as well).
Hi Fizan and Mike,
Under “Paramecium” they have a “learning” heading in Wikipedia and it says this: “The question of whether paramecia exhibit learning has been the object of a great deal of experimentation, yielding equivocal results. However, a study published in 2006 seems to show that Paramecium caudatum may be trained, through the application of a 6.5 volt electric current, to discriminate between brightness levels. This experiment has been cited as a possible instance of cell memory, or epigenetic learning in organisms with no nervous system.”
Even if so however, I have a different take. It seems to me that Operant and Classical learning were developed so that we might understand ourselves, not how more basic life works. Note that operant conditioning was set up to describe something like how a young girl might learn to open a box of treats (reward), or to not touch a hot stove (punishment). And then classical conditioning would rather address how she might do things automatically such as tremble when afraid. Relating our conceptions of a full human back to a basic form of life thus seems a bit ridiculous, though I don’t doubt that it’s done since Skinner and the rest got us started at the top so that we could only then work backwards down to other life. Why not go from basic life up to us? That’s my approach.
Mike has recently been kind enough to formally consider my own scheme. We begin by distinguishing mechanical function from computational function, with the computation being algorithmic. The first computations seem to have occurred through genetic material, the second through central organism processors. I theorize that the vast majority of these processors are not conscious, with conscious processing, even for the human, being less than one thousandth of a percent of the total.
So beyond its genetic material, the paramecium has no central processor, which is to say no non-conscious mind, let alone a conscious variety. I don’t doubt that its behavior could be interpreted as “operant”, though only because a bottom up scale such as my own doesn’t yet exist in science.
Thanks, Eric, for sharing that Wikipedia article. Considering your approach, I mostly agree with it: simple organisms don’t have much of a processing ability. However, I am not sure we should call it a non-conscious mind; why not just call it processing instead?
With regards to the bottom up approach, I wonder if it is even possible to start from a point that is not us. Secondly, if the behavior demonstrated by other organisms isn’t “operant” then what else might we call it?
And if you consider (I don’t know if you do) that other animals with ‘brains’ have operant behavior/learning, then where do you draw the cutoff between the animals that don’t have operant behavior/learning and those that do?
In my opinion, we can make claims of generalizing principles between things (including humans), such as operant learning, as these are abstractions that help us learn. But it is a further, detached step to then use such a principle to impose our own subjectivity onto other beings that are remarkably different from us.
You’ve given me some great questions and observations here, and I’ll now attempt to not come off as a crackpot given how radical my ideas happen to be. I believe that I’ve developed some extremely useful models for mind (which is to say, computation), non-conscious mind, and conscious mind. The best I can probably hope to do here is interest you or some of your readers, but I’m good with that. There’s quite a bit to the whole thing, and communication isn’t simple since there are certain ways in which I fight conventional thought. Here’s a rough outline:
Clearly the mechanical typewriter functions mechanically, while a modern word processing computer functions computationally. Note that mechanics provide a relatively fixed type of system, while computation is a potentially far more dynamic process by which input is algorithmically processed for associated output. The first computers on Earth that I know of came about through the genetic material associated with life. Beyond this there are central organism processors (“brains”) in life which harbor nervous systems. Then finally there are the computers that we build. I’ll also say that groups of computers, such as Dr. Ben-Jacob’s bacteria samples, as well as a crowd of rioting people, can function as a larger form of computer between them. (In my own writings I like to distinguish mechanical processes from “mental”, though I’ll settle for “computational” here since this should be less confusing.)
I’ve been defaulting to Mike’s December post to explain how central nervous system computing first came about, possibly inciting the Cambrian explosion. Once nerves did not simply connect one form of input to one form of output, but rather came together in a single place, they could be factored together algorithmically for associated output. (https://selfawarepatterns.com/2016/12/04/is-consciousness-a-simulation-engine-a-prediction-machine/) After that there should have been all sorts of non-conscious computers running through their processing of input for output, but apparently they ran up against a basic difficulty that required consciousness. (Perhaps some day our idiot robots will become advanced enough to hit this same boundary as well.)
I suspect that consciousness was then required because in a complex environment, evolution couldn’t write effective enough general instructions to deal with various specific situations. Apparently it got around this by building a kind of computer that functions on the basis of feeling good and bad, and thus a new entity which has personal interests in figuring out what to do emerged. This might be thought of as a computer which has autonomy, or thus facilitates teleology. Here evolution wouldn’t need to deal with what it couldn’t effectively program, but rather could leave such decisions to “selves” that were punished and rewarded based upon the circumstances of their existence. Below I’ve (hopefully) provided a schematic diagram of how I theorize human computation to function.
As for your questions:
—Yes fortunately I am able to do without using the “mind” term, falling back to the more general “computation” term.
—It does seem possible to begin with something that isn’t us, as physicists and others display with their models, though I think I understand your meaning. As in the post where you discussed panexperientialism, I suspect that epistemology is your concern. Cheers to that! I sometimes think that someone other than myself will need to straighten this field out in order for the rest of my ideas to potentially be understood.
—Instead of “operant” I’m in favor of using the term “conscious”. Of course to support this we’d need a functional model of consciousness, whether mine or others.
—The conscious can be distinguished from the non-conscious by means of any of the conscious inputs, processor, or output that I provided in my diagram above, though I consider the “affect” input that constitutes punishment/reward to be the key. If an ant cannot feel good or bad then it should essentially be like one of our robots (but far more advanced). From my definition I actually suspect that ants do have some level of consciousness, though I lack any hard evidence.
—I appreciate your caution regarding anthropocentrism, and share it. Regardless of this inherent challenge, we must try to do our best. However, we can at least call each other out when we think we have a case to make.
Thank you, Eric, for sharing your views, I find them interesting. I also read Mike’s post you mentioned and found it to be a fantastic article. I do, however (as usual) have some critical thoughts on these notions:
It seems you are talking about evolution as if it has a purpose of its own. As you say,
“..consciousness was then required because in a complex environment, evolution couldn’t write effective enough general instructions to deal with various specific situations. ”
For me, evolution does not seem to have a purpose; it is more an explanation for what we observe. What we observe is that, given an environment, those most adapted to it survive compared to those that are not. For this to happen there has to be variation between individuals, which is brought about by random mutations and errors in the replication of genetic material. In this framework, I see consciousness as likely to have developed from a mutation, or combination of mutations, which provided some survival advantage in specific circumstances. I don’t see consciousness as being required, either, because if we look at all creatures, perhaps the most successful (in the evolutionary sense) are non-conscious simple organisms like bacteria.
As we observed in the video link you provided, it appears bacteria are very capable of adjusting to various specific environments (even novel ones, like being exposed to space).
The other thing I’m wondering about is when you say “..building a kind of computer that functions on the basis of feeling good and bad..” How does this sense of good and bad develop in the first place? I feel a fundamental aspect of consciousness is the ability to feel. If by good and bad you mean pain and pleasure, etc., then these are conscious states. That is where the hard problem of consciousness lies: explaining these phenomenological states.
I’m grateful for your kind words on my post. Thank you!
My only comment on this discussion is a point about language. When talking about evolution, it’s exceedingly difficult to avoid teleological language, even when everyone in the discussion knows that we’re talking about random mutation paired with unguided natural selection. To avoid every proposition being mired in pedantic qualifications, we have to try to see the underlying principle being discussed. Evolutionary biologists often talk about species “innovations” metaphorically. It’s just a way of talking that makes the discussion easier.
There’s a concept called teleonomy, the *appearance* of purpose, which is what most of us are describing when using this way of talking.
Of course, if we do detect that the other person has crossed into advocating for some form of premeditated forward looking design, then we should call them on it. But aside from advocates of theistic evolution or intelligent design, I rarely run into anyone actually arguing for that.
Thanks, Mike, for pointing that out. I try not to, but it seems like a bad habit, and I always end up getting bogged down in the details too much; apologies. I totally understand what you are saying. But perhaps what I really meant to point out was that sometimes we may see consciousness as a destination, as the top-ranking thing, the ultimate achievement, or as if it had to happen. I don’t see it that way, but rather as a random event with some value (akin to many other random events). Perhaps most of us already believe that, but I felt it should be made clear, so that we do not use its supposed inevitability (which doesn’t seem to be true) as justification or backing for its existence.
Thanks Fizan. I agree completely that consciousness or intelligence isn’t an inevitable result of evolution by any means. I do think it’s adaptive, but so is an elephant trunk, which nobody but an elephant would regard as the pinnacle of evolution. I often point this out to people who think we can grow an AI using artificial evolution. So I think we’re completely on the same page.
No time for a full comment, but I do lean a bit to your suspicious side regarding anthropocentrism. This podcast is of a scientist who has been dispelling the notion of evolution as “progress” when it’s better considered as “change”. Mike even bought the book.
Now that I have a moment, yes I believe that your worries about anthropocentrism are valid. We must not let ourselves forget that we are human rather than the objective observers that would better suit science. Thus convenient language should be problematic at least subconsciously. But then as Mike observed, we also need to speak without getting bogged down stating endless qualifiers. Still I do like to at least see some qualifiers from time to time for general grounding. That Ben-Jacob video about bacteria unfortunately provided nothing of the sort. Conversely at about minute 9 of the podcast that I provided, Dr. Suzana Herculano-Houzel demonstrates how science must constantly fight suspect human intuitions by means of evidence. Considering evolution as “progress” is the standard sort of anthropocentrism that we need to be wary of, and I consider far worse varieties to still fly under the radar today. The following is probably a better link to Dr. Ginger Campbell’s show. http://brainsciencepodcast.com/bsp/2017/133-herculano-houzel
When we speak of evolution we generally talk deductively rather than inductively — the end result is known, though we want to know how things got this way since that should help us understand how things work. Here you’ve asked me about how we came to feel good and bad. I have no opinion about that however, as well as little such curiosity. I presume it arrogant of us to think that human engineering capabilities are yet anywhere near close enough to evolution’s for such an answer to be developed. This to me is the true hard problem of consciousness.
As I understand it David Chalmers didn’t stop with this “how” of consciousness however, but included a “why” of it as well. I believe that I have a pretty good way to address that however.
The hypothesis is that non-conscious programming hits a wall in forms of life that face more diverse circumstances. While it could effectively program microorganisms, fungus and plants, at some point things went wonky, whether for insects, fish, reptiles, or whatever. (My suspicion is insects.) So what can’t normal computation do very well that conscious computation can?
Normal computation for more “advanced” forms of life should have problems dealing with situations which they weren’t specifically set up to address. (Just ask any robotics engineer.) There must be too many different options regarding what might be done, such as go out to lunch with a friend, or read a book at home. So here evolution seems to have cheated. It said, “In certain ways I’m no longer going to program you. Instead you will account for this yourself, and do so on the basis of the punishments and rewards associated with your existence.”
(To ground this I believe that there must have been non-conscious life that carried around something inside that could feel good and bad, but without any functional effect. Over millions of years such a feeling must have randomly gotten hooked up to become the full secondary form of computer that we know as “consciousness”.)
Hi, Eric. I think I understand your ideas much better now, thanks for the clarification. Just focusing on two aspects, firstly do you think then consciousness was a sudden switch ‘on’ or a gradual build up of events?
Secondly say a group of (nonconscious) insects gets separated from their natural habitat, into an unknown environment, how do you think they would deal with it? Isn’t there a chance given enough time they would gradually evolve to become better adapted to deal with that environment yet remain nonconscious?
Great questions Fizan!
Whether consciousness is switched or not will depend upon the definition that’s used for the term. There isn’t a true definition for it, or any other term (given my first principle of epistemology). But consciousness most certainly is switched from the model that I’ve developed. The existence of any of the three forms of conscious input provided in my graph, the single form of processor, or the single form of output, are switches which mandate the existence of consciousness in the human. Here it is again: https://physicalethics.files.wordpress.com/2017/04/screenshot_2017-04-19-07-29-45-1.png
Of course consciousness in something else may require classifications that are more associated with that entity, though this model at least provides the theme. Furthermore, note that no one of those five alone should bring functional consciousness. For functional consciousness to emerge there would at least need to be some form of motivation input, a processor that can interpret such input and construct scenarios about how it might promote its “self” interests, and some form of output from which to potentially do so. Though initially the emergence of such elements must have been effectively useless, theoretically this second form of computer must have come to exist in a functional sense at some point.
On your insect scenario where they’re put into new environments and yet survive, clearly ants do this often enough, and sometimes thrive. Do they also lack all consciousness as I’ve defined the term? That’s the widespread presumption today. Perhaps for political reasons Feinberg and Mallatt explicitly state that their own “Why?” of consciousness, which concerns the use of distance senses, doesn’t lead them to believe that insects have consciousness. Here I consider them to have made a huge blunder. Don’t flies process input information regarding what’s happening around them? Even our idiot robots have distance senses (which in truth is a pretty good reason to believe that F&M should keep searching for a better “Why?” of consciousness).
Anyway if ants do not have any consciousness as I’ve defined the term, then they survive and evolve through the normal sort of computer that we build. I can’t say. But at some point normal computers must have hit a wall — otherwise we wouldn’t be conscious. My own hypothesis is that with more and more diverse contingencies to deal with, at some point evolution must have been incapable of keeping up with the programming demands. Thus I think it said, “Screw this! Here is punishment and here is reward. Now you must figure things out somewhat for yourselves given the personal consequences to your existence.”
I should clarify regarding this change, however, that I suspect less than one thousandth of a percent of the mental processing which occurs, even in the human, is conscious. I believe that we’re mostly just standard computers. But then why would you never have heard such a thing before, if this is the case? Well, 100% of the mental processing that we experience is conscious, and we experience zero percent of what isn’t conscious…
Yes this could be another example of systemic anthropocentrism.
On Feinberg and Mallatt, I may have accidentally led you astray at some point on their reasons for doubting insect consciousness. They do see distance senses as an important hallmark in the fossil record for when sensory or primary consciousness came on the scene. Of course, many insects actually have distance senses, so excluding them as a class for that reason wouldn’t make much sense.
Their actual reasons for doubting consciousness in insects have more to do with the size of insect brains. F&M reason that there may not be enough substrate for the minimal neural layers they associate with consciousness. I suspect they either missed or didn’t have access to recent studies showing cost/benefit trade-off decision behavior in fruit flies. If they had, they probably would have at least mentioned it when they discussed possible consciousness in non-vertebrates.
While we always have to be careful not to project our own mental scope onto creatures, the fact that insects thrash and buzz frantically when sprayed or stuck in a bug catcher sure gives me the impression they have affective feeling states.
Mike and Eric,
I think the major difference in my opinion is that I don’t see us or other living beings having computation, as we understand it, at work. Yes, computation can be a good analogy to help us make sense of it. But the facts as I see them are that it isn’t computation but ‘something else’ that’s happening.
That’s why our computation analogy doesn’t hold up all the time. That’s why (as Eric says) normally programmed computers aren’t able to deal with situations they weren’t specifically programmed for, whilst life generally is able to deal with such situations. The difference is that computers are programmed and designed to do specific things, whilst life is not. Life just evolves.
We can attempt to replicate it (maybe in AI research etc.), but there will (in my opinion) always be a disconnect, because a copy isn’t the original thing; it is something else entirely, and the similarities we may see are only good for us, from our own perspective. What seems to constrain us is us being us.
I gave my reasons for computationalism in my recent post on information. For me, the “something more” for animals with sensory consciousness are imaginative action scenario simulations, which seem inherently computational. As to life being able to deal with situations, it’s worth being aware of cases where that isn’t true, such as a deer caught in the headlights of an onrushing car, mosquitoes flying into lighted bug zappers, or something like this: https://www.livescience.com/16331-discoverers-beetle-beer-bottle-sex.html
Ultimately this is a question that will be answered with more neuroscience. Either the computational outlook will continue to be fruitful, or it will start to fail and some new paradigm will be needed. The more neuroscience I read, the stronger computationalism (broadly construed) looks to me, but science is an inductive exercise, and new observations could always upset the apple cart. Only time will tell.
Mike, that’s probably the divide between us. Only time will tell – or it may never tell. I am always skeptical (and agnostic).
The examples you gave do not seem to be the wall Eric was referring to, because most conscious humans end up in accidents, fall for delusions and illusions, or make stupid mistakes all the time as well.
For me, it seems very straightforward that analogies are just analogies and not the real thing. I feel we will get better and better in our analogies by learning from life and then incorporating it into our models, but ultimately they will still be models and mimics (nevertheless I am fascinated and look forward to this endeavor). Unless we can learn everything (and incorporate it), our models will be incomplete. From what we know, there is always something more to learn.
Hence I am not optimistic about current neuroscience being able to solve the hard problem of consciousness in its current direction, because it is too entrapped in a material and reductionist model. Such models eventually lead to cop-outs such as Panpsychism. Although I still look forward to the progress, because we never know; perhaps at some point we may trigger some self-organizing phenomenon. (That’s why I strongly believe we should attempt to reproduce consciousness through AI, as that would be the best type of evidence.)
I am also optimistic about science overall having paradigm shifts which may enable a clearer understanding. But there is no transcendent reference to say we will definitely know it eventually either.
I’ll begin by disclaiming that I’m naturally biased towards the models which I’ve developed. Furthermore I will not deny that I’m jealous of F&M — they are thought of as successful theorists, whereas I am not. I say this because I consider it important for my own growth to acknowledge my many flaws. Nevertheless the greatest theorists are able to develop effective models despite the flaws that they have, whether through luck, skill, or whatever. Hopefully my own biases correspond with more useful models, since I’m damn sure not actually objective. So with that in mind…
If F&M theorize that distance senses were instrumental to the rise of consciousness (supported by fossil evidence from the Cambrian and such), but also add the disclaimer that much of life today harbors distance senses without consciousness, since it lacks sufficient neural connections to support it (or whatever), then I don’t see how their theory can be considered useful. Obviously it would be far better for them if they were able to provide a theory that lacked such a substantial caveat. (But keep in mind that I cheer their failure given that I have a competing theory, and I’d absolutely love for it to receive no less scrutiny than they get.)
Let’s say that they do retract their belief that insects aren’t conscious (which was probably politically motivated anyway, given the general perception that useful definitions of consciousness exclude insects). Would their distance-senses theory of the rise of consciousness then become sound? Well, it seems to me that pretty much no one considers the robots that we build, which generally use distance senses, to be conscious. I haven’t read their book, so it may be that I don’t understand their position sufficiently, but otherwise it seems to me that they should look for a more distinctive aspect of consciousness from which to theorize what it’s good for. My own suggestion is “autonomy”. Still, I’m quite grateful for the work that they’ve done, and especially since your five-post analysis of their book ten months ago helped me find you.
I am no less excited about meeting you than I was with Mike. Your skepticism is something that I’d have you keep to your end — it will not fail you. But as for your agnosticism, I hope that you will some day find ideas that you consider powerful enough to say “Fuck it!” and go all in. It’s clear to me how addicted you are to this stuff, but there is no “Church of the Agnostic” for a very good reason — “whatever” evokes no passion. Find your passion. Perhaps my second principle of epistemology will help provide a path. It goes like this:
There is only one process by which anything conscious, consciously figures anything out. It takes what it thinks it knows (evidence), and uses this to test what it’s not so sure about (theory). As evidence continues to remain consistent with theory, theory tends to become nothing more than believed.
Thus my counsel is that we get nowhere by searching for truth. Instead we must attempt to develop effective beliefs.
Thanks Eric for the kind words. I fully agree with your last statement that we must attempt to develop effective beliefs. Though I must say I am passionate in my indecisiveness, as I find it to be the ultimate honesty (and secretly see it as the path to utopia).
On F&M, distance senses, and robots, I think it’s important to understand their reasoning on this. Evolution (metaphorically speaking) can’t design complex systems from scratch. It must implement them using step by step adaptive improvements. It can’t look ahead and build a sophisticated capability on one component that won’t be adaptive until the sophistication of another component increases. Its systems must be adaptive at every stage.
F&M’s point about distance senses is that they’re not adaptive unless paired with some kind of exteroceptive capability, an ability to model the environment. It provides an organism little adaptive value if it has sophisticated eyesight without the ability to do something with the signals from that eyesight.
F&M call the exteroceptive capacity “exteroceptive consciousness”. You might argue that this isn’t consciousness yet, and I have some sympathy with that argument. As we’ve discussed before, it’s a matter of definitions. But it’s worth noting that exteroception itself isn’t all that adaptive if it doesn’t include at least an incipient sense of the organism’s own body in relation to the environment.
In the comparison with robots, we have to keep in mind that robots are engineered systems, not evolved ones. They don’t have evolution’s constraints of having to work in incremental steps where every step must be adaptive. The constraints here are in what we know how to design at this point.
And a camera on a robot can be useful even if the robot can’t use the information for exteroception, because it can transmit it to us for that exteroception.
But as I noted in my series on F&M’s book, a case could be made that self driving cars have a type of exteroception. The car doesn’t have interoception or affect awareness (feelings), so I don’t think the word “consciousness” is productive yet, but they’re a rung higher on the consciousness ladder than most automated systems.
So you’re passionately indecisive? Well sure, I’ll go along with that in an ontological sense. Sign me up for your church of “We just don’t know”! I’ll need convincing regarding any associated ‘utopia’ however. (But yes, perhaps we should keep that part quiet.) Most importantly however we seem to be square regarding epistemology — there’s nothing but “beliefs all the way down”!
I think that I see how our sense of evolution is currently just a bit different. You’ve mentioned incremental steps that are adaptive at each stage. While I certainly don’t deny that such steps must occur, in the random process of evolution we should also expect that things would not always work out quite this simply. We should expect that from time to time there would be various traits which are neutral or even somewhat evolutionarily negative for a species, but that hang on for a while simply given the billions and billions of opportunities for such traits to randomly appear and disappear. I believe that they’re known as “spandrels”, and that from time to time they should nevertheless end up finding effective uses.
Consider a monkey type of creature. Over its millions of years of evolution we’d expect various isolated traits to emerge and die. Let’s imagine genes which randomly provide a strange little finger at the hip for some, and that it’s neither useful nor much of a detriment. Furthermore let’s say that this particular species of monkey stores water in gourds for its journeys, even though carrying the gourd requires the use of one of its two hands. But then it turns out that the monkeys with the otherwise useless hip finger are able to attach a water gourd to it and so use both of their hands for their journeys. So even though this spandrel was a full waste until such a use was found, there would now be some potential for the trait to become universal for the species.
During the Cambrian we should have had life with central organism processors from which to algorithmically factor input information for associated output. I have no reason to think that such life should not have developed exteroceptive capabilities, or the capacity to model their environments non-consciously just as our robots model the circumstances under which they exist — one teleologically designed (robots), while the other is designed over great periods of time randomly (life). Distance senses, such as from detected light, should have provided crucial input information to algorithmically process for associated output function. The self driving car does this, as well as a simple system which is able to effectively denote when something human-like enters a camera’s field of view.
Now let’s get to consciousness (as I’ve defined the term). Apparently these Cambrian biological robots must have hit a wall at some point, since a conscious form of computer did eventually evolve as well. My theory here is that evolution couldn’t program these creatures non-consciously quite effectively enough, I suspect because their environments were too open for straight programming to do the job (even though the former was fine for things like microorganisms and plants). For these forms of life a personal entity from which to harbor autonomous function seems to have been an effective way to go. Thus the rise of teleology.
Note that initially there should have been spandrels within various forms of life that provided only individual components of functional consciousness, but without any practical effect. Thus it could be that something could feel pain, though with no ability whatsoever to diminish that pain, and thus no way for the feeling to be functionally effective. Also there might have been something that could consciously operate muscles, but without any motivation from which to incite these muscle movements. So I’m saying that at some point things must have randomly come together to create a primitive example of consciousness that was also effectively functional. Furthermore this might have happened and died off thousands of times without actually propagating. But apparently at least one time there was a functional consciousness that did succeed, which is to say that a second mode, which constitutes all that we humans know of existence, did occur.
So would you say that evolution needn’t be just made up of small steps that are each progressive? And would you say that information such as that which occurs through light, might effectively be used by something which is not conscious?
As I understand it, spandrels definitely exist, but for them to persist, they have to be neutral in terms of their effect on fitness. In other words, they can’t be a liability. A spandrel that is expensive in terms of development maturity time and energy consumption, would be selected away. The conscious subsystems you describe seem expensive to me.
I failed to talk about affect consciousness in my last reply. F&M based their assessment of affect consciousness on behavioral tests. (I posted the criteria somewhere on this thread.) They then morphologically compared the modern species that pass the tests (all or most vertebrates) to the fossil record, and conclude that affect consciousness goes back to the Cambrian.
Could exteroception be paired with a reflex only system? It’s conceivable. Ants and some other insects might be examples. But the vast majority of animals with high resolution eyes seem to display affective states. The more sophisticated the distance senses, the less plausible an exteroceptive only system seems.
“So these structure would unlikely have evolved to bring about operant learning.”
That could be, although I’d be skeptical of single-celled nonreflexive learning until I saw the details and the results reproduced. That said, nonreflexive operant learning is just one of the criteria proposed for affect awareness (which I think exists to assess the results of imaginative simulations). Again from Feinberg and Mallatt:
“Criteria for operant learned behaviors that probably indicate pain/pleasure (or negative/positive affect)
• Learning a global, nonreflexive operant response based upon a valenced result
• Behavioral trade-offs, value-based cost/benefit decisions
• Frustration behavior
• Successive negative contrast: degraded behaviors after a learned reward unexpectedly stops
• Self-delivery of analgesics, or of rewards
• Approaches reinforcing drugs/conditioned place preference”
Feinberg, Todd E. The Ancient Origins of Consciousness: How the Brain Created Experience (MIT Press) (Kindle Locations 3662–3672). The MIT Press. Kindle Edition.
I tend to focus on the first two. The third seems to require a lot of interpretation, and the last three seem like special cases of those first two. But by far, the second seems like the most important.
I’m not an expert on operant conditioning, and what I say here may be a bit off track from your original intent. I think your point was that imagination is not a necessary component in operant conditioning. But the title of your post reminded me of the work of Dr. Eshel Ben-Jacob, and you may enjoy these links that show interesting strategies for problem-solving in bacteria. I won’t say it definitely shows the use of imagination, but it shows a very interesting “something” related to these questions of consciousness, information processing and learning. I thought this paper was interesting because he discusses the strategies bacteria use to solve problems for which they do not have pre-determined, genetically stored answers at their disposal.
Link to summary of one of Dr. Ben-Jacob’s papers
Link to the citation
I also think there is a talk on Youtube that Dr. Ben-Jacob gave on this topic.
Thanks for sharing. It looks like an interesting paper and I will definitely read it. Thanks again.
This seems to be the video that Michael was talking about, and I found it fascinating. My interpretation is that the genetic material of each bacterium is so advanced that it’s set up to produce the proper chemicals to make it do what it does, altering chemistry on the fly given what is taken in. For example, it can “smell” food in the sense that such particles are taken as input, which is then algorithmically factored for output. Furthermore, chemical signals are produced which lead others of its colony closer to or further from it, given the information that it processes. Thus for each bacterium, genetic material serves as a true non-conscious mind that causes it to do anything from swim in a given direction to reformulate itself for long-term hibernation. And as displayed, the networking of each bacterium effectively produces a combined non-conscious mind that can be extremely difficult to kill.
(Earlier I should not have said that the paramecium has no non-conscious mind, but rather that it has none beyond its genetic material. This is probably quite advanced, at least when compared against our computers.)
Thanks Eric for sharing the video link (and thanks again Michael for sharing the original article). Just finished watching it. I must say it was more than what I was expecting; Dr. Eshel Ben-Jacob has done some groundbreaking research. It goes to show, to some extent, our anthropocentric bias in measuring intelligence. The concept of swarm intelligence in bacteria is fascinating as well, and reminded me of organized ant colonies.
I agree with you Eric that the bacterium can do processing at the genetic level. And not just at the genetic level but at the cell level in colonies.
> In one of the slides, he demonstrates that the bacterium uses a smell gradient to direct its movement. It makes a movement first (I presume ?randomly) then measures the gradient, stores this information, then on the next movement it measures the gradient again. If there is an increase, it moves further in that direction.
This is an interesting event worth considering, and it raises many other possibilities and questions. Firstly, how does it store the information? (In what form does it have memory?) Secondly, to calculate the increase or decrease of the gradient, what (mathematical) processing does it use, i.e. how does it manage to calculate? Thirdly, when it does calculate an increase in gradient, how does it ‘know’ which direction to go next, i.e. how does it have a sense of direction?
For the third point, my presumption is that each direction of random movement has its own signature pattern, and the bacterium reinforces the use of the pattern which resulted in the increased gradient in the previous movement. But (if so) this raises another question: how does it ‘know’ which pattern of movement to reinforce (perhaps ?reflexively)?
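The move-measure-compare loop described above can in fact be sketched with no imagination at all, just a single stored reading. Here is a toy Python simulation of that strategy (my own illustration, not Ben-Jacob’s actual model; the concentration field, step size, and tumbling rule are all invented assumptions): keep the current heading while the measured “smell” increases, otherwise pick a new random heading.

```python
import math
import random

def concentration(x, y):
    """Toy 'smell' field: concentration rises toward a food source at (0, 0)."""
    return 1.0 / (1.0 + x * x + y * y)

def chemotaxis(steps=2000, step_size=0.05, seed=1):
    """Gradient climbing with one stored reading as the only 'memory':
    move, re-measure, keep the heading if the reading improved,
    otherwise tumble to a new random heading."""
    rng = random.Random(seed)
    x, y = 3.0, 4.0                       # start away from the food source
    heading = rng.uniform(0, 2 * math.pi)
    last = concentration(x, y)            # the single stored gradient reading
    for _ in range(steps):
        x += step_size * math.cos(heading)
        y += step_size * math.sin(heading)
        now = concentration(x, y)
        if now < last:                    # reading got worse: tumble
            heading = rng.uniform(0, 2 * math.pi)
        last = now                        # replace the stored reading
    return x, y, last

x, y, c = chemotaxis()                    # ends in a higher-concentration region than it started
```

Nothing here simulates future scenarios: one remembered measurement plus a reflexive keep-or-tumble rule is enough to climb the gradient, which is one way a ‘sense of direction’ could emerge without imagination.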