Researchers recently discovered a >1.8-million-year-old little finger bone. They published their findings in the prestigious journal Nature Communications. Interestingly, on extensive analysis and comparison with known species, this bone was found to fall within the Homo sapiens (modern human) category.
However, because the bone is dated to >1.8 million years old, the researchers exclude it from the Homo sapiens category in their discussion:
Collectively, these results lead to the conclusion that OH 86 represents a hominin species different from the taxon represented by OH 7, and whose closest form affinities are to modern H. sapiens (Fig. 3). However, the geological age of OH 86 obviously precludes its assignment to H. sapiens.
In the spirit of science, the discoverers have left the final conclusions open to further evidence. However, others claim this finding is another drop in a growing body of evidence that challenges the conventionally accepted theories of human origins.
Even in accepted archaeological science, the date at which we 'anatomically moderns' first came into existence has been pushed constantly to earlier and earlier times. Current estimates are around 200,000 years, edging closer towards 300,000 years (or even earlier). However, some archaeological evidence doesn't seem to fit the prevailing narrative. Finds of this type hint at modern human origins going back millions of years rather than hundreds of thousands.
Domínguez-Rodrigo, M., Pickering, T., Almécija, S., Heaton, J., Baquedano, E., Mabulla, A. and Uribelarrea, D. (2015). Earliest modern human-like hand bone from a new >1.84-million-year-old site at Olduvai in Tanzania. Nature Communications. Available at: https://www.nature.com/articles/ncomms8987
Well, I think there are a couple of things to remember here. The first is that evolution is theorized to be a gradual process, so there shouldn't be a distinct line anywhere, but rather countless individual changes. Secondly, the Homo sapiens term, like all other terms, is whatever we define it to be. If scientists decide it's useful to mark this distinction at 300,000 years, well, that's just a convention rather than some kind of truth out there. If they don't see any inordinate differences at this point rather than others, then yes, move this category back or forward to where a significant difference becomes apparent. I suspect that the shape of a little finger bone isn't going to alter the perspective of many such professionals, even if it is similar to ours.
I personally like to distinguish the genetically modern human by its development of oral language, though a time frame for that hasn't been simple to determine. I consider language not just an amazing form of communication, but a potentially quite advanced mode of thought.
Fizan, I'm always impressed with how professional and clean your posts happen to be. Can you tell us anything about the guy pictured at the top of this one?
Thanks Eric, I am trying to give some consistency to my articles. Due to copyright issues I can't usually share the pictures most relevant to the topic. However, pictures do get our imagination running more so than words, so I chose the above picture, which is available in the public domain and shows a member of a modern hunter-gatherer tribe in Africa.
I completely agree with you that a little finger isn't going to change the existing structure of the narrative. Nonetheless it is interesting and worth noting. To appreciate exactly what you are saying: the reality of things isn't clear-cut by any means, and we usually use arbitrary cut-off points. If we consider a little finger to be little evidence, that also seems to reflect how we subjectively view things; a little finger is little only to us.
What is more interesting is the claim by some that this is just a fraction of the evidence; there is much more dramatic evidence out there as well, which I will write about. But, as can be expected, more dramatic evidence provokes a more dramatic rejection. I want to clarify that I by no means endorse those views either, but I do like to reflect on opposing narratives.
Coming back to the little finger: its date is what's most significant about it. It is nowhere near when the modern human is estimated to have evolved. Here's where I become a bit skeptical; there are a few ways in which such evidence can be interpreted. One is to consider that modern-human-like hand morphology evolved before modern humans themselves did. But what could have led to the same morphological hand adaptations in a relatively primitive hominid species existing >1.8 million years ago? And so far we can't identify which hominid species it belongs to either, despite comparing it to all the species we theorize to have existed at that point in time.
I don't want to get too pedantic. What I'm suggesting is that if the existing narrative were that modern humans have existed for more than 1.8 million years, this exact same sample would easily have been placed within that category. So the knowledge a piece of evidence provides us seems to depend greatly on our existing narrative (expectation). On one hand it could be the sole evidence of a new species and new evolutionary mechanisms, while on the other it is a simple human bone. Take a hypothetical: let's say the first ever archaeological dig found this piece of bone, and we were able to date it to 1.8 million years. Where would it have led us? I think it would have been thought to belong to a human, and our origins would have been estimated at at least that long ago. And when we did start to find other hominid species, this bone still wouldn't have fit in. And if we also found the other evidence (the dramatic kind, which I'm going to write about in the future), it wouldn't have seemed as dramatic, because it would have fit the developing narrative.
What I like most about your perspective here is that you’re not endorsing per se, but rather assessing ideas which challenge the status quo. Of course there are lots of loopy ideas outside of the establishment, but safe ideas (and obviously group think) also won’t get us where we need to be. I eagerly await the more dramatic evidence that you’ve mentioned (though I also grant no free passes).
For the “anatomically modern” concept, I like to begin from the other side of things. Regardless of our evidence, where is a good place to make such a distinction? Well if a baby from an earlier era were somehow born to a modern human family, could it fit in somewhat? Or instead would such a subject become more like a freak, or perhaps a pet? I admit that this test is a bit stiff, since obviously even modern humans sometimes become freaks or need to be taken care of like pets, but not inherently so. I’m asking if such a subject could ever be considered essentially one of us? If under the proper conditions one of these ancient humanoids could be taught to function somewhat as we do, then I’d consider it useful to call them “homo sapiens”. This is the narrative that I consider useful, and independent of any fossil evidence.
So what would it take to potentially fit in? Maybe a proficiency in creating and using tools, or perhaps fire? I don’t think so. A similar appearance would probably help, though as I see it this shouldn’t in itself get the job done. The only way that I believe such a being could function somewhat like us, would be to have the potential to gain at least some capacity to understand (if not so much speak) a modern human language. And for this I suspect that reasonably advanced languages would have needed to evolve back then, with associated cognitive abilities as well.
Coincidentally (or perhaps not so much), Mike and I have recently been talking over at his blog about a “secret sauce” which permits the human to be human. If you check the “Recent Comments” list between us over there, I’d love your thoughts. I suppose that the fossil evidence that you’re going to tell us about won’t get into language, though I will be interested in the narrative that these divergent scientists take, whether I consider it helpful or (more likely) I don’t.
(Just noticed Mike’s article below on homo erectus, which will certainly interest me.)
The timeline of human evolution is constantly being revised based on new findings. If we start finding lots of million+ year old evidence of modern humans, we should be prepared to revise the timelines. But I don’t think a little finger that appears to be modern should be, by itself, enough to make such a radical revision to the timeline.
Interestingly, when anatomically modern humans arrived is a matter of interpretation, depending on what we mean by "anatomically modern". In fact, there are substantial differences in anatomy between the remains from 200-300 thousand years ago and modern humans. We tend to regard them as modern because the volume of their brain case was similar to ours, but the shape of their brains was very different until around 100,000 years ago. Also, remains from more than 100,000 years ago tend to have the prominent brow usually associated with pre-modern humans. And evidence for behavioral modernity only arises in the last 50-80 thousand years. (Adding to the confusion are the finds that the oldest cave art may have been drawn by Neanderthals.)
All of which is to say that any line drawn between modern humans and pre-human is, to some extent, arbitrary.
Yes, I agree it is very messy. However, it does seem that cranial capacity is not the only criterion used to define anatomically modern humans. In this study they present ranges with centiles for different morphological aspects of the little finger, such as its curvature. Even on these discrete measures, a rough distinction can be made between most hominid and ape species; admittedly there is a degree of overlap at the extremes of the ranges, but the median values are usually well defined and differ between species on the scale.
As I said to Eric there is other much more dramatic evidence out there as well which I will write about in the future. Do look at my reply to Eric for more information.
Just in case you haven’t seen this article yet…
Thanks for the link Mike. I haven’t read this before, seems interesting.
Mike, I really enjoyed the Homo erectus article that you've provided here, and yes, I suppose that's because I interpret it to corroborate the position that I've recently laid out in a discussion with you at your site. I wonder what your current thoughts are about language as an invented tool, given sufficient cognitive capacity?
To be certain, the position of Daniel Everett is radical, but to me it also seems sensible. How could Homo erectus have reached the places that it's known to have reached without boats? And if it was able to build such tools, as well as make these sorts of journeys, it should have required not just the ability to communicate, but also a form of language from which to think (which is to say, to plan out what it was doing), as well as to express such thoughts back and forth. Symbolic representation (or a semiotic path) may have been the key that unlocked language for a creature with sufficient cognitive capacity.
My guess is that if a baby homo erectus from maybe 400,000 years ago were raised in a human family today, that it could potentially learn and use modern language, even if it didn’t have dedicated oral tools from which to express this language very well. (This would be given that its ancestors had been using language for over 600,000 years before that point.)
(Fizan, one of my comments seems to have gotten lost. Let me know if you can’t find it.)
Everett's views resonate with a conviction I've had for a long time, that language is far more ancient than most anthropologists think. I've long thought that it goes back to at least Homo heidelbergensis (which Everett lumps in with Homo erectus, a move I have no real position on). My view is that language evolved gradually, in stages. I suspect erectus did have language, but I also suspect it was far less sophisticated than sapiens or Neanderthal language. (I do think it's likely a Neanderthal baby could learn sapiens language, but we really just don't know.)
But I think what appeals to you is Everett’s view that language is just a matter of computational capacity. Late in the article he asserts that horses and dogs can interpret symbols, which seems to bolster his claim. Fascinated, I took a look at the linked studies. The horse one was really just horses operantly learning to select certain symbols for specific results, and the dog one seemed more about emotional reactions to particular noises. Neither seemed to show the animals actually interpreting the symbols. (I haven’t looked at the ape one yet, but I’d expect them to be better candidates.)
Here’s the thing. As a long time professional programmer, I can assure you that functionality is *never* just about capacity. If it was, we’d have stopped programming computers in 1952 and just watched while functionality came for free with each hardware upgrade. Likewise, functionality in brains doesn’t come from just more neurons, but with additional evolved neural circuitry. So it’s true that language requires more neural capacity. But it’s also true that the right functionality has to be there as well.
But language, the ability to access our conscious experience and associate aspects of it with symbols, is a very complex capability. Most complex adaptations have simpler predecessors. What were language's predecessors? My current thinking is that it required the ability to access our conscious experience, period, or at least in a much more sophisticated manner than most animals can. Based on the other studies I've shared with you before, we can see signs of that predecessor in other primate species.
This isn’t to say definitively that no other animal has access to its own experience. It might be that non-primate access is just far less developed, or maybe it’s of a different enough variety that our primate centered tests are missing it. We only know that we can’t confirm it outside of primate species. (Some studies claim to detect it in dolphins, but their methodology is controversial. There’s also a study that claims to find it in rats, but it’s widely acknowledged that, similar to the horse study, operant learning is what’s really happening.)
All of which is to say that I think Everett can be right about erectus having language, but be wrong about what is needed for language. I’m curious to see what paleo-anthropologists have to say about the first assertion, particularly in terms of the evidence he claims to see. But in my view, Everett didn’t justify the second one. (Which actually makes me nervous about how good a job he actually did with the first, which I don’t have the expertise to assess.)
It sounds like we’re in agreement regarding the ancientness and importance of language. Furthermore our agreement grows once I remove your misconception that I consider computational capacity to be important in itself. I was actually referencing cognitive capacity, or the computation associated with conscious function, and then in reference to a minimum required for language. There will be a minimum requirement for containing a gallon of milk, for example, and we agree that a 500 gallon container is mostly wasted for such a task.
Now that you and I (and I think Fizan) may be considered “rebels” somewhat, let’s try not to support presumably bad science in as many ways that we can. When you command a dog to sit, and it does exactly as instructed, is there something less than “symbolic interpretation” here? We can certainly say that it has the “sit” symbol under its conscious grasp, for it anyway (though I suppose that a given dog might understand the term for humans as well). Regardless it will have all sorts of other symbolic understandings. But even if it can comprehend “Eric”, “vet”, and so on, we also know that it doesn’t grasp the English language in general. Eliminating false differences should help us reach the real differences.
I’ll also stand up for operant learning, since the consciousness model that I’ve developed functions entirely through a punishment / reward dynamic. Here we interpret inputs and construct scenarios in order to figure out how to promote personal value. Thus I submit that all conscious life does actually have access to its own experiences. How could we say that something in pain, doesn’t access it? Of course it does, and this access is theoretically how consciousness functions.
So given access to what is felt, let’s get to how language might have evolved. Everett believes that tools were probably important, and perhaps so. I can see how a society that builds spears and has generally been using them for countless generations, would find it useful to have a spoken symbol for this sort of instrument. Furthermore I suspect that many social animals today have terms that they use, such as for predators. So here I’m not talking about anything all that strange.
The first stage of language should thus have been stand alone nouns for all manners of persons, places, and things. But this should have still been a bit frustrating and primitive, since it depends upon a listener taking a noun from the proper context. So I suppose that the second stage would have been verbs, such as “Come [whoever]” and so on. Similarly we would expect symbols for adjectives and on and on, increasing again and again into modernity.
Given that all conscious life has access to what it feels, I don’t see a need for something beyond general cognition and general evolution required for language to evolve. Though an incredible tool for us, language should have still had humble beginnings.
Rather than saying "sit", you could also show a dog a spoon or a light and it will sit, as with pretty much anything else you train it to sit to. What we do in operant conditioning is successively reinforce desired behavior toward certain stimuli using rewards. To me, operant conditioning seems to be independent of conscious ability. In fact, it could run opposite to it: a human has much more capacity to consciously overcome or resist the effects of operant conditioning than, for example, a mouse does.
Even simple organisms such as bacteria tend to show evidence of operant learning (as we discussed in a previous post here). I think all that is needed for operant learning to occur is some basic degree of computational ability: a basic feedback mechanism and a desired direction.
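To make the point concrete, that "basic feedback mechanism plus a desired direction" can be sketched in a few lines of code. This is a hypothetical illustration only (the stimulus name, learning rate, and class are all invented for the sketch, not a model of any real organism): responses to a stimulus are reinforced whenever a reward follows, with no notion of "meaning" anywhere in the loop.

```python
import random

class OperantLearner:
    """Minimal reward-driven learner: a feedback loop plus a desired direction.

    The learner raises its tendency to respond to a stimulus whenever a
    reward follows the response. Nothing here represents or interprets
    the stimulus; it is pure feedback.
    """

    def __init__(self, learning_rate=0.2):
        self.learning_rate = learning_rate
        # stimulus -> current tendency to respond, between 0 and 1
        self.response_strength = {}

    def respond(self, stimulus):
        """Respond probabilistically according to current strength."""
        strength = self.response_strength.setdefault(stimulus, 0.1)
        return random.random() < strength

    def feedback(self, stimulus, rewarded):
        """Nudge the response strength toward 1 if rewarded, toward 0 if not."""
        strength = self.response_strength.setdefault(stimulus, 0.1)
        target = 1.0 if rewarded else 0.0
        self.response_strength[stimulus] = (
            strength + self.learning_rate * (target - strength)
        )

random.seed(0)  # deterministic run for the sketch
dog = OperantLearner()
# Reward the "sit" response to a blue light over repeated trials.
for _ in range(200):
    if dog.respond("blue_light"):
        dog.feedback("blue_light", rewarded=True)
```

After enough rewarded trials the response strength for "blue_light" climbs toward 1, i.e. the behavior is conditioned, even though the learner has no access to anything like an experience of the light. That the same loop works for a spoon, a bell, or any other arbitrary label is the point made above: the stimulus is interchangeable.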
Coming back to the dog example: is its behavior of sitting to a stimulus equivalent to symbolic interpretation? It is an interesting proposition.
If we are saying symbolic interpretation involves access to conscious experience, and then associating aspects of it with certain symbols, do you think a similar thing is happening here? I suspect not. Even if you suggest a dog has access to its conscious experience, operant learning can happen completely unconsciously.
When we say "sit", it has a symbolic meaning to us. But can we also say that any stimulus to which the dog sits has a meaning for it? Meaning should be consciously accessible. It seems the dog just unconsciously sits to any number of stimuli we want it to.
Restored your comment; I don't know why, but my spam filter had blocked it. Thanks for pointing it out to me.
Your observations permit me to go a bit deeper into this language business, so I appreciate the opportunity. Let me first clarify here that I'm not actually a behaviorist. There are simply elements to behaviorism that coincide with my own theory. Then secondly, the "operant conditioning" term as I was using it does not apply to anything that lacks consciousness, such as a computer or bacteria. Of course I don't mind if you or others use the term that way, though I wasn't, and I didn't take Mike to be. Operant conditioning has no "true" definition, though if people commonly use it that way, then I shouldn't have used it at all. ("Classical conditioning" seems more appropriate there to me, not that I have much use for either term, really.)
I’m saying that the dog is able to acquire symbolic representations, just as the human is, and about as simply as this: Associations are naturally made with sounds, images, smells, and tastes during conscious function, and such associations themselves can potentially become symbols. A dog can know me by means of seeing me in person, or through a picture of me, or through the smell that I emit, or the sound of my voice, or the spoken “Eric” term, and on and on. Each could thus be taken as symbols of me. These are acquired, I think, the same way that a human might acquire such symbols — through conscious experience. But just because a given dog might know a spoken name for all sorts of people, places, and commands, this does not mean that it’s able to grasp the English language. These understandings will simply be some portion of how it makes sense of its existence. The symbols merely “communicate” to it. And even with things like yelping and angry barking, we have little evidence that back and forth oral dog communication is all that involved.
When you say that for the dog “Operant learning can happen completely unconsciously”, I do agree, but of course that’s the case for us as well…
Maybe the most effective way to challenge me here would be to argue that dogs don’t have access to their own experiences. But then if they do have experiences, what evolutionary sense would it be for them to have no such access?
I agree with you that dogs (and other animals) can have associations and those associations can be grouped together as representing aspects of a single thing. As in your example these can be the smell, sight, sound of a person etc. These sensory associations are concrete in that they are actually part of that person rather than being a representation of them.
I think by symbolic representation as we use for language what we are saying is, imaginary conscious representations which take the place of the concrete sensory experience of things. This process infuses the sensory experience with imaginary meaning. Without such a process there can still be associations like for example a face-recognition camera can also associate aspects of what makes a face. It can differentiate this association from the background, the body and also from other faces. There is no reason to believe, the ability to make such sensory associations on their own has anything to do with consciousness or language.
I remain undecided about this. The best way to know would be to be a dog myself. In the meanwhile, my current bias is that language has a crucial role to play in how we experience consciousness (language just being a word to represent a deeper process). I also feel that it is not the whole of consciousness (as I did start to feel a few years ago). Consciousness remains a mystery to me. I feel animals do probably have phenomenological experience, but this would be quite different from how we experience consciousness.
The operant learning example does reflect on this view as well. As I said mice (and dogs) seem to be more at the mercy of operant learning than humans (I would have serious problems trying to train you to sit to a blue light). The reason for this, in my view, is that consciousness is able to work opposite to unconsciousness. The more conscious free will you have the harder it is for you to be programmed.
Yes, millisecond based studies reveal humans have biases and inclinations which are there even before they are consciously aware of them. Other studies also show that we have already begun to make decisions before we are consciously aware of them. That’s fine. But what is being discussed in these studies is how the unconsciousness works. Once something does enter into our consciousness we do have control over what we do with it.
I was also intrigued by this statement of yours. But I incline towards Mike's position that functional ability does not come purely from computational capacity or number of neurons. A lot depends on how the neurons are structurally organised. If it were only about the number of neurons, many animals have more neurons than humans; even if we consider only neocortical neurons (the latest to evolve), some dolphins have twice the number humans have.
My use of the word “operant” was meant to refer to non-reflexive learning. (I do remain unclear on the distinction between reflexive operant learning, which I’ll admit is a thing in the research literature, and classical conditioning.)
I mostly agree with Fizan in this discussion. A dog can learn to respond a certain way to the sound of the word "sit", but that doesn't mean its association of the word with the action operates at the level of symbolic interpretation. If it did, then we should be able to issue commands in new simple combinations and they'd be able to interpret the combination, but that doesn't happen (except in movies).
Whether the dog is conscious of its operant learning depends, I think, on how we define consciousness. Non-reflexive learning would require imaginative simulations, but in humans it appears those simulations can happen outside of our introspective perception, which means it can happen in dogs even if they don’t possess the ability to introspect.
Does consciousness require introspection? In humans, if we can’t introspect it, then we usually consider it to be outside of consciousness. That said, an argument could be made that consciousness lies more in the experiences normally reachable by introspection rather than in the introspection itself.
Eric, you ask what evolutionary sense would it make for animals to have experiences but not have access to them. My question is, can you identify abilities that require that access? If you can, then we can observe whether animals have those abilities and gain some insight into whether they have that access. That’s why I’ve focused on the studies looking for metacognition in animals. Their results are pretty stark, showing it only in primates and possibly dolphins.
We have to be very careful not to project our own experience on animals. The human cerebral cortex has 16 billion neurons, the dog’s about 530 million. It shouldn’t be controversial that a dog’s experience will be substantially less developed than a human’s. They will be missing something we experience. That’s why a principle in animal research is not to favor a higher order cognitive explanation for behavior when a simpler one is available. (Of course, we’ll be missing some of a dog’s experience, such as the detailed olfactory maps they make of their environment. But in general we should expect our cognitive perceptions to be deeper and broader.)
I agree with you that other animals are unable to use symbols in the way that we do, and I presume because other animals haven’t evolved to use languages. Still I do consider us to acquire our useful symbols in the same way that they acquire their useful symbols — through conscious experience.
I’m impressed that you’re able to admit that consciousness remains a mystery to you. My advice is to just keep doing what you’ve been doing, or balancing an open mind against a skeptical outlook. If nothing ever makes sense to you, then go ahead and remain agnostic. Of course science isn’t yet able to claim effective answers here, so it’s not like there’s any real party happening without you.
I believe that I’ve developed some extremely useful mental models, though teaching others how they function has been challenging. If you end up understanding how they work pretty well, then I’ll be extremely interested in your assessments of them.
As I define consciousness, human or otherwise, introspection can be an element of it, though it's certainly not required. The way that we can assess whether this definition happens to be useful is to explore alternatives. Would we want to say that something which is suffering horrible pain, but can't introspect it, has no consciousness in this regard? If you decide this to be a useful definition then I'll accept it while considering how your models function, as well as hope for reciprocation. Regardless, yes, I do find it useful to define consciousness by means of "experiences normally reachable by introspection rather than in the introspection itself".
You ask if I can identify any animal abilities that require these subjects to have access to their experiences? Well since my theory is that conscious abilities in general require access to conscious experiences, I suppose that I can. (By the way, I’m interpreting “access” here as an ability to experience and/or assess conscious inputs. Let me know if you have something else in mind for the term.)
A parent could do things which promote offspring survival automatically, and/or it could do such things consciously. To go the conscious route however my theory is that there must be a punishment / reward motivation associated with accessed experiences. Let’s say that a given parent has no empathy or sympathy for its offspring, but evolution has also programmed it to enjoy providing offspring nutrition. Well even this would be an experience (the enjoyment part, that is) which would need to be accessed in order to motivate associated conscious behavior.
You may recall me telling you about my parents’ presumably jealous dog (in an argument against Lisa Feldman Barrett’s theory of constructed emotion)? She’s a sweet old lap dog that I think would never dream of being physically aggressive with my young nephew. But I presume that she had access to feelings of betrayal associated with the attention that my parents would give this child, and so she covertly took a doll that he brought over and ripped it to shreds for them to find in their bedroom.
We could also consider the behavior of trained police dogs. They should need access to their experiences in order to display their various conscious talents. So I predict that removing such access, perhaps by means of certain drugs, would also remove these conscious abilities.
Mike unless you state it plainly, I will not believe that you believe that dogs have conscious experiences, and yet no access to them. To me this just wouldn’t make sense. What I suspect is going on here is that you’ve observed that lots of people seem to believe that their pets feel essentially the same things that humans feel, and so you may have gone too far in the other direction. It could be that in a very basic capacity much of what we feel was needed in conscious life long before there were humans, and that these feelings simply do not require the number of neurons that you’ve been imagining they do. Even though we currently term feelings such as jealousy to be “higher order”, at least note that humans probably didn’t seem all that special over the vast majority of their evolution.
One thing that I think we should keep in mind about soft sciences, is that professionals in them should naturally be biased to believe that their theories are “harder” than they happen to be. Regardless I believe that my perspective on animals generally corresponds with the perspective of professionals who actually work with them. Furthermore it still seems to me that what scientists today are calling “metacognition”, is simply an arbitrarily greater level of standard cognition. Their tests certainly do not measure whether or not animals are thinking about the concept of thought (not that I consider this useful for much more than human academics anyway). That there are animals like the human which have far greater cognitive abilities doesn’t suggest a profound difference to me.
I'll also remind you of the study that you once led me to, which Lisa Feldman Barrett cited in her book. The experimenters decided that dogs can't feel guilt because they look and act the same when their masters scold them, independently of whether or not their behavior suggests that there is reason to feel guilty. Did it really not occur to them that wrongly punishing dogs could cause them to look and act similarly to subjects that actually are guilty of such behavior? Perhaps so and perhaps not, though far better work than this will be required in order for our soft sciences to harden up.
Anyway Mike, I think that I’m finally starting to get what has troubled you about the models that I’ve developed. Perhaps modern science has given you the impression that there is something inordinately strange about the human. I conversely present parsimonious models which suggests that we are a simple continuum of the whole system. My position is that beyond our much larger cognitive capacity in general, we’ve been transformed by four amazing revolutions — oral language, specialization, written language, and hard science. Hopefully we’ll end up reconciling our positions for the betterment of us both!
“Mike unless you state it plainly, I will not believe that you believe that dogs have conscious experiences, and yet no access to them.”
I think there are at least three possibilities:
1. Dogs aren’t conscious. The illusionists are right that our introspection of experience is an illusion. Creatures without introspection don’t have that illusion, and since there is only the illusion, they aren’t conscious.
2. Dogs are conscious and have some limited form of access to their experience, or it’s in a form that our (possibly) primate centered tests can’t detect.
3. Dogs are conscious, but at a simpler level than humans, one that doesn’t include introspective access to their experience, but the experiences are still there.
My sense is that 1 is unlikely, mainly because the computational resources to construct the illusion of experience seem like they would be as much as, and probably more than, what is needed to simply construct experience. That said, the human version of these experiences is inextricably tangled up with our metacognitive access to those experiences, which could make the weaker sense of this position a matter of how we define “experience”. And it’s worth remembering that we tend to consider whatever we can’t introspect to be outside of human consciousness. (Note how I avoided the “un” word here 🙂 )
2 is possible, but it feels like question begging to me, a refusal to accept what the data is telling us. But I’ll fully admit we can’t rule it out.
That leaves 3. I won’t say I have certainty about this option, but it seems like the most plausible one to me. Incidentally, this is entirely compatible with F&M’s views of consciousness. They never claimed to be explaining metacognitive self awareness. And the fact that a dog has about 1/32 the neurons in its cerebral cortex that a human does seems entirely compatible with this scenario.
Okay, I see that you’re using both a “consciousness” term for all that I consider conscious, or sentient beings, and then a special “introspection” form of consciousness as well (associated with what I at least consider to be an arbitrarily greater level of cognition that permits certain primates to pass certain tests). So then I suppose that in future discussions with you I could steer clear of this advanced form of consciousness, and then see if you have any questions or comments regarding my basic consciousness model itself. This is the one that states that the “brain” is made up of a large computer, as well as a tiny conscious one that functions through the first. Theoretically the function of the small computer constitutes what I know of existence, with input (senses, valence, and memory), processing (thought), and output (muscle operation).
(Thanks for avoiding the “unconscious” term, which I consider people to use in too many ways to be effective.)
I think there are layers to consciousness. In a lot of the literature this is often referred to as first-order consciousness, constructing concepts and images of the outside world, which dogs must have in order to generate the behavior they exhibit, and second- or higher-order consciousness, which involves constructing concepts of the concepts, what I call metacognition.
Another way to look at it is the 5 layers I’ve discussed before.
1. reflexes, automatic reactions to stimuli
2. perception, building concepts of the environment to increase the scope of what the reflexes react to
3. attention, prioritizing what from 2 the reflexes react to
4. imagination, simulating scenarios to see how the reflexes react, and inhibiting or allowing reflexive reactions of 2-3 based on that
5. metacognition, building concepts of the concepts recursively for more sophisticated behavior, such as symbolic thought
I think dogs have 1-4, but can’t see any evidence for 5. Human level consciousness requires 5.
Honestly, sometimes I think the same thing about “consciousness” that you think about “unconscious”, that it’s a word with too many varied meanings. I’ve noticed that a lot of neurobiologists avoid it altogether in favor of more precise terminology.
I really like where you’re going with the thought that consciousness may be a word with too many varied meanings. When I get my thoughts together on it I’ll shoot you an email, since we’ve taken this one way off topic.
Consciousness does remain a mystery to me. It’s not from a lack of trying to understand it. My search for that understanding is nowhere near its conclusion, and it seems to grow with time. So far, though, my direction has gone from a mechanistic/neuroscience/materialist explanation, to a more psychoanalytic idea, to a more agnostic acceptance of some limitations. This mirrors how my world view as a whole seems to be progressing. I feel that going into psychiatry and then getting acquainted with a certain brilliant psychiatrist brought about this change. I’ve spent over a year arguing against his ideas with him and vice versa. I came to accept something of where he was coming from. I haven’t given up though.
I feel we eventually come to accept things by redefining them so they can fit into a larger picture; this reduces our cognitive dissonance. Often we progress by giving up on things without even realizing it. For example, as I heard Chomsky saying, no one is really a materialist anymore. There is no materialist/mechanistic world view anymore, yet we are still using the term as if it means something. This change happened when we came to accept forces at a distance like gravity; there is nothing material about such magical actions. But we came to accept them as a property inherent to matter. (I don’t want to go off on a tangent and start a whole other topic here; we could leave it for a further, more relevant post.)
I suppose that it will be difficult for me to compete with a brilliant psychologist who has been interested enough to have discussions with you for over a year. But will this perspective of “limitations” be enough to satisfy your inner curiosity in the end? Perhaps not.
Of course you might well have unintentionally misrepresented him. As a scientist who is paid to explore human nature, is he happy with the contradiction of “agnostic acceptance” paired with “some limitations”? Scientists aren’t paid to figure everything out right now, though they are paid to piece together whatever they can wherever they can, in the hope that effective understandings of the causal dynamics they explore may be developed in time. The only valid reason to propose fundamental limitations that I can think of would be to begin with a premise of dualism.
I wonder if he’d like to respond privately? If so then I can always be reached here: thephilosophereric@gmail.com
(Wow, in this day and age Chomsky argues that there are no mechanisms associated with gravity? That would be an interesting post!)
My increasingly obsessive nature means I’ve got things accumulating which I want to write about but have been unable to.
(See my reply below to James; it may be relevant to your comment as well.)
That psychiatrist’s view is the ‘psychoanalytic idea’ stage in my progression above. I seem to have moved beyond even that (on my own) to the agnostic-acceptance bit. Even still, I may very well be misrepresenting his views here (I try my best not to); he might say even this process says something about symbolism (I’m describing my imaginary concept of his ideas).
He and I are both psychiatrists rather than psychologists. We try to treat people based on the ‘best available evidence’. You read psychiatric textbooks and a lot of the time they start with something like “we don’t fully understand how…”
The ‘limitations’ perspective is also rather newly formed. It makes more logical sense to assume we have limitations than to say we do not (at least to me). This is also partly inspired by Chomsky (and some others I forget). We are mammals. I presume (from my perspective) that if a tree were trying to make sense of its universe using leaves to achieve this (don’t ask me how), it might eventually say, “I don’t have enough leaves to understand some things.” We may say we don’t have enough cognitive capacity. Whether that’s true or not is another thing. It may not even make sense. So that’s something we have to work with.
Then ‘my perspective’ is also a limitation: I can’t become anyone else (I can only try to imagine it ‘from my perspective’).
On gravity, Chomsky does not say there are no mechanisms associated with it. Rather, we change our definition (by lowering our standards, in a sense) of what ‘mechanism’ implies. So taking the classical (original) view of mechanistic processes and materialism, you couldn’t deduce gravity or reduce it down to something material (in the original sense). You had to accept another fundamental property of the universe, namely ‘forces’. I would suggest watching Chomsky’s talk itself (again, I don’t want to mislead people):
Title: Noam Chomsky – “The machine, the ghost, and the limits of understanding”
Apparently I misunderstood what you were referring to with the term “limited” there. I was imagining that you were using it in an ontological rather than an epistemological sense. I’ve noticed philosophers taking this sort of escape route most often, as in “Don’t blame us — what we study doesn’t submit to effective answers anyway”.
I agree entirely regarding the limitations of human understanding. Actually if the field of philosophy were in a much better place today then I think that all students would be educated about this issue long before they become professionals. (On this occasion I’ll spare you and your readers of my two principles of epistemology, unless asked of course.)
Regarding Jacques Lacan that you brought up below with James Cross, please try to keep your wits about you regarding such longstanding fringe ideas. I’d counsel agnosticism until things make very good sense to you in this regard, if they ever do. Given how long our soft sciences have struggled, it’s surely the case that today there is a tremendous amount of unhelpful crap festering in the system. Ah, but which is which? Your community should progressively figure this out as your science hardens up.
I do get confused with the ontology epistemology divide. I guess you could say I am talking about how we come to know things but I am also saying that we ARE limited in the way we come to know things.
I did not get what you meant by philosophers taking an escape route and why they are being blamed for some things. I would encourage you to share your principles of epistemology if you feel up to it (we’re already in flight-of-ideas territory 🙂 ).
Jacques Lacan is certainly not fringe. His is an established psychoanalytic school of thought practiced throughout the western world. It is rather more difficult to grasp, making it less attractive, I suppose. But a lot does not depend on whether it is true or useful. No, what happens is that these days we tend to prefer things which are cheaper, easier, and seem to make sense readily. I don’t know if that process is the hardening up you are referring to. I probably don’t understand your concept of the science ‘hardening up’. You say it as if it is something destined to happen. The hard and soft sciences seem like a social (humanistic) hierarchy. Perhaps even a stigmatizing one. I would suggest reading this abstract:
https://www.journals.uchicago.edu/doi/pdfplus/10.1086/227835 (The Hierarchy of Sciences)
It seems to me that you weren’t talking about what ultimately exists, but rather human conceptions of what ultimately exists, or epistemology. That works for me. Ontology would be what actually exists in the end. Physics is epistemological for example, while metaphysics is ontological.
The ontological escape route that I was referring to in philosophy goes about like this: Some people (including certain prominent scientists) are extremely dismissive of philosophy in general given that nothing ever seems to get resolved in the field. In defense some philosophers argue that the ontological nature of what they explore itself isn’t suited for collective agreement, thus apparently absolving themselves of blame for not developing any agreed upon understandings.
I’m unhappy both with philosophers for employing such a cheap ontological means of defense, and with critics for pushing them this way. Why not try to be constructive? I consider philosophy amazingly important, and suspect that its inability to develop a respectable community that provides humanity with various accepted understandings helps keep our soft sciences soft. (By “soft” I mean a void in agreed upon professional understandings. That there are so many schools of thought regarding the fundamentals of our function demonstrates a, yes, stigmatizing softness in your field.)
Thanks for asking about my epistemology! Actually I have four principles of philosophy that I hope will become generally accepted some day.
My single principle of metaphysics isn’t quite that all of reality functions causally, but rather that to the extent that reality doesn’t function this way, there aren’t things to figure out anyway. If accepted this would effectively put the dualists and such in a club that resides outside the scientific community.
My first principle of epistemology is that there are no true or false definitions, but rather only more and less useful ones in the context of a given argument. So if someone presents a definition, this principle demands that it be accepted in order to assess that person’s ideas. Conversely, today we seem to think about “consciousness”, “time”, and so on as if they exist to be discovered rather than defined. I suspect this to be the greatest structural flaw in science today.
My second principle of epistemology is essentially the scientific process itself, though displayed universally regarding conscious function. It states that there is only one process by which anything conscious, consciously figures anything out. It takes what it thinks it knows (evidence) and uses this to assess what it’s not so sure about (a model). As evidence continues to remain consistent with a given model, it tends to become progressively more believed.
Then finally there is my single principle of value. It’s that it’s possible for a non-conscious form of computer to produce punishing to rewarding experiences from which to drive the function of a conscious form of computer. I consider this “valence” stuff to be all that’s valuable to anything throughout all of existence.
Once philosophy develops a respectable community which has its own generally accepted principles, whether mine or others, I believe that scientists will be able to do their jobs far more effectively than they’re able to today, and certainly on the soft side.
If I say we are limited in the way we come to know things, would that be an ontological statement? I am claiming that is how things ultimately are.
I have some acquaintance with the science vs philosophy dispute you are referring to. My perspective (so far) is that they are concerned with different things which only superficially appear to be similar in many cases. Science is more concerned with the usefulness of things and is equally unable to explain phenomena. Science is also a way for us to reduce our dissonance by deferring uncertainties to a higher, yet unknown, explanation. Science achieves this by being more pragmatic whilst at the same time giving up on previously set standards. Philosophy sees schools of thought, and scientific thought generally falls within a few of those schools, e.g. pragmatism, naturalism, empiricism etc. As other schools of thought are in conflict with each other, so is the scientific school of thought in conflict with them and vice versa. The scientist is yet another type of philosopher who doesn’t agree with the rest.
The differences don’t stop there: even within science, various scientists don’t agree on many things. Similarly, within other philosophical schools of thought, philosophers tend to disagree on things as well. See what we are doing right now, and also look at other people we’ve both had discussions with (like Mike); we haven’t been able to reach agreements (unless perhaps we were to compromise on some things).
I don’t know if you were able to look at the article I previously shared but the researcher’s conclusion was that in all sciences there are similar levels of disagreement at the research frontier and consensus is usually maintained through sociological processes and reward systems.
Thanks for sharing your principles of philosophy. Would you say these are your a priori beliefs?
On your principle of metaphysics, my critique is that you are assuming that the scientific community is a homogeneous group. You probably desire them to be more homogeneous. You may end up with a monist scientific community and a dualist scientific community, each considering the other outside the “scientific community”. Even within these communities, there would likely still exist some disparity.
I mostly agree with your first principle of epistemology. We are left with defining ‘useful’ in a circular manner.
I’m not too sure about your second principle. It seems limited to consciously built models and beliefs. I don’t think that is all there is to the acquisition of knowledge. Unconscious processes like innate knowledge and biases (to name a few) also seem to be important in how we come to acquire knowledge.
I was unable to grasp your principle of value at this point.
So you’re claiming that humans are limited in the way they come to know things, not simply as an effective model of reality, but indeed as something that’s ultimately true? Well yes, I suppose that I could then call that to be an ontological rather than simply epistemological statement.
I think I understand your assessment of science versus philosophy, and to me it does seem like an earnest observation of how these separate disciplines function in practice. Beyond practice, however, I go a bit more idealistic with this sort of thing. Given my single principle of metaphysics, I presume that all things function causally (since exploring non-causal function would be hopeless anyway, not that it couldn’t be the case). Well, if all things do function causally, then the study of reality should ultimately address one interconnected entity in the end. Thus not only should those who study the nature of reality consider all areas of science as a unified entity, but this should include what’s in philosophy’s metaphysics, epistemology, and axiology (or value branch). In order to help scientists better assemble this puzzle, we shouldn’t permit dynamics like these to be left off the table.
Here I need to be careful to not come off as some asshole “scientismist”, as they’re sometimes called. These are people who are very dismissive of philosophy’s two and a half millennia of (western) contribution to human culture. I, conversely, consider this scholarship to be a tremendous treasure, or something to both preserve as well as expand. But additionally I believe that a second such respectable community is needed, or one that sees itself as promoting our continuum of reality exploration. In order to become established I believe that it will need at least a few common principles of agreement (such as my own four). The “proof” of them should come to the extent that standard physicists, chemists, neuroscientists, sociologists, and so on, also come to find such principles productive tools from which to promote their own explorations of reality. This second community of “philosophers” (and I don’t care what name it ultimately takes) would thus join scientists in their ultimately singular quest.
Yes I did earnestly look over the abstract that you’ve provided above, as well as attempted to purchase the full paper. For some reason my credit card number wouldn’t quite go through. Anyway I suppose that you could assess whether or not I at least understand the position of that paper.
I consider the top of the hierarchy to be where sciences are far more empirically verified, such as physics, while not so much at the bottom, such as sociology. Furthermore, even at the frontier in subjects like physics there should be disagreement, given that it’s the frontier. Still there seems to be plenty of things in hard science which are quite uncontroversial for people educated in the field, though not so much for soft science. (Another way to assess where a given field lies in this hierarchy would be to consider what introductory textbooks provide. If they mainly provide definitions and answers, then it should be more on the hard side. Conversely, if definitions and questions are mainly provided, then it should be more on the softer side of science.)
I presume that this paper presents data to challenge this hierarchy, and on the grounds that hard science isn’t quite as hard as it seems, and soft science isn’t quite as soft as it seems? Well the fruits of hard science are quite apparent today given recent advancement in human power, while the fruits of soft science are nothing of the sort.
The other apologist argument would be the one that I see most, or that soft science softness is fundamental rather than epistemological. Well perhaps so, though to me this seems like not only a self-fulfilling prophecy, but quite possibly erroneous. My own position is not that our soft sciences are incapable of becoming harder, but rather that they lack solid founding principles from which to build. In these efforts I provide my own such theory from which to potentially found these fields.
Note that any differences in opinion between people like you, Mike, and me should be expected, given that our interests lie at the frontier of human understandings. There’s nothing wrong with that! It’s not exactly that I’ve noticed full disagreement between Mike and me, but rather that he hasn’t quite grasped the nature of how my sometimes radical models function as a fully continuous machine. We’re still working that out. He should pretty much have it once he’s able to predict how I would address a wide range of dilemmas. At that point he should be able to provide me with even greater assessments than he already does. Similarly I’d hope to teach you the nature of my models as well, to gain your own educated assessments.
I hadn’t previously classified my four principles of philosophy as a priori or a posteriori, so thanks for prompting me to think about this. Upon reflection, my single principle of metaphysics is a priori, as is my first principle of epistemology, given that they’re each true by definition. But my second principle of epistemology and my single principle of value are observation-based positions, and so reflect a posteriori beliefs.
In your critique of my single principle of metaphysics you’ve stated that we might end up with both monistic and dualistic forms of science. Well actually, this is exactly what I seek. Today we instead have a mishmash of the two. It’s similar to how we have both humanistic and non-humanistic philosophers, I think. Let’s separate the dualists from the monists so that they can each have separate clubs from which to work. (Of course the dualist scientists, as well as humanist philosophers, would squeal like hell about being divided up into separate classifications, given that they’d naturally be considered the lower forms. This parting would need to be handled with tremendous diplomacy at first I think, and then swiftly taken rather than asked for.)
On my first principle of epistemology, “useful” isn’t actually circular, but rather founded upon my single principle of value (which I’ll get to below).
On my second principle of epistemology, you’re entirely right that it’s limited to consciously developed models and beliefs. No claims about general belief are made by it. Beyond the unconscious dynamic that you’ve mentioned, consider also the opposite of my principle. Things can be believed not only by means of “reason”, but also by means of “faith”.
So then what do I mean by “value” as something which is possible for a computer which is not conscious, to produce for a conscious form of computer to experience? Well I’m talking about the neuron based computer that’s in our heads. I consider this entire thing to be no more conscious than what I’m typing on right now. It takes inputs, processes them algorithmically, and then provides associated outputs. But apparently this computer is so advanced that one element of its output is to produce an entirely separate variety of computer, or the conscious one by which we experience existence. This second computer contains no neurons or any other “hardware”. That should all be part of the non-conscious mechanism which creates consciousness. The conscious computer (as I define the term) exists phenomenally, while the non-conscious computer is standard.
I theorize conscious function to harbor three varieties of input, one variety of processor, and one variety of (pure) output. Its defining form of input is “valence”, or the stuff that I theorize to be all that’s valuable to anything throughout all of existence. I consider this to be the punishment and reward which drives conscious computational function. (Electricity drives the function of the computers that we build, while all sorts of chemical dynamics drive the function of the non-conscious computer in our heads.) My single principle of value exists as the motivation which drives the punishment/reward dynamic of conscious function.
Thanks for that thoughtful reply Eric.
I hope you don’t mind me continuing to question some of your thoughts.
You presume that all things function causally and that exploring non-causal function would be hopeless. The rest of your statements follow on from this. Before digging further I want to clarify: does this causality have to be along the arrow of time (past to present), or can it also be retro-causal (i.e. future to past)?
The real question is around that presumption. I have a few thoughts here. I presume a dualist may loosely say something like: it is unknowable how the spirit affects the material (and vice versa), but we have evidence that both exist. A material monist would likely say it is knowable how this happens, because the spirit can also only be material. In both these statements I find a similar problem. They are both statements of what reality is really like (ontological?). For the same reason I don’t find either view scientific. Whether you say it is unknowable or knowable, you are overlooking and rejecting the truth of the matter, which is that it is “unknown”, and we don’t know whether it is knowable or unknowable.
Is everything an interconnected entity? We don’t have evidence to support that claim. Despite the lack of evidence, should we move forward with the presumption that that is the case? Under that presumption we are effectively trying to prove our presumption true, which is fruitless since we had accepted it to begin with.
But that isn’t what philosophers as a whole do, or what philosophy as a whole does or can do. The reason being, there already are such philosophers who do that, and philosophy isn’t a subject focused on one frame of thought. It encourages and encompasses diverse thinkers. In a sense it’s a useless term, because everyone is a philosopher in some sense. You seem to want everyone to become more homogeneous by accepting certain presumptions to be true (when in my opinion there is no evidence or reason for them to be true).
Coming onto our next discussion, on hard vs soft science. I can see how this divide fits into your narrative where everything has to be causally and materially figured out. You make two important distinctions: first, that in hard sciences things are more empirically verified; and second, that in hard sciences there are plenty of uncontroversial things, as opposed to soft sciences.
Looking at the first distinction, I argue that there are many things that are empirically verified in the so-called soft sciences as well. As an example, people’s behaviour changes when someone is observing them as opposed to not (the Hawthorne effect). (Or you can take classical or operant conditioning, or the Gestalt laws, etc.) Conversely, let’s take something as basic as mass and gravity in physics. We have empirical evidence from fairly simple observations that something’s wrong: the mass of every galaxy is more than 3 times what it should be.
A distinction also needs to be made between observation and explanation. Observations are usually straightforward; they can be repeatable, etc. The problems are usually with the explanations given for the observations. Explanations tend to be colored by presumptions. Although one might argue that in the so-called hard sciences observations tend to be more precise and repeatable, this does not automatically give the same credibility to the explanations offered.
I also think we tend to let the usefulness of observations bias what we think is hard vs soft. For example, the discovery of fire, the wheel, electricity, radio waves, radioactivity, etc. These things have been crucially useful to us, and that has nothing to do with the task of trying to bring everything under one umbrella (a theory of everything, unification, etc.). The benefit is derived from the observations themselves and then figuring out how they could best be put to use for human needs.
Coming to the second distinction: in the soft sciences there are also many uncontroversial things, some examples of which I gave above. Here again, observations tend to be more uncontroversial than explanations, which are usually always controversial to some extent. That consciousness exists is an observation; why it does, or what consciousness is, is a controversial explanation. Similarly, that matter exists is an observation; why it exists, or what matter is, is a controversial explanation.
I agree that it may be useful to divide the monist and dualist scientists. But isn’t that already the case? They are separate by definition. Or perhaps you mean something like an ostracization in the public eye?
The ‘lower forms’ here is disturbing. According to whom? is the question. In my eyes the monist and the dualist are similar in that they each have an overarching belief system without evidence for either. Yet they can both produce useful science, as that does not require an overarching belief system to be present in the first place. Then again, both views might hinder scientific discovery if their observations conflict with their assumptions and they try to salvage the latter (which is something I tend to observe here).
What do you mean by
This is something that eludes my understanding.
And when you say the conscious computer does not have any hardware, how does it have a processor? If that processor is also based on the non-conscious hardware, then why include it in the conscious ‘computer’?
I’m very happy to continue explaining my positions!
Regarding time, I do have evidence that it moves forward though no evidence that it moves backwards. Just as a film in reverse displays events that seem illogical to me, time in reverse would be illogical to me as well. But then I wouldn’t expect to have any evidence of reverse time even if this does commonly occur. Perhaps it does. I like your “retro-causal” term in that regard. Retro-causality wouldn’t conflict with my own belief in causal determinism so I’ll remain agnostic here for the moment.
I like to think of time/space as one unified object (including any other dimensions of existence if applicable). Thus whatever has happened and will happen, exists as one perfectly interconnected four or more dimensional entity. Instead it’s common to think about existence as something which is momentary. If causality is fully complete, then all dimensions of existence should actually exist as one amazing single structure.
On “knowledge”, for the most part I don’t like to use this term in formal settings since it suggests infallible beliefs. I have only one belief about reality that I consider infallible. It’s that I personally exist, or “I think therefore I am”. (I love being able to attribute the unique element of reality that I know exists, to the great dualist, René Descartes.)
With the term “knowledge” I perceive you to be getting at determinate versus indeterminate elements of existence? Regardless I agree with you that both the dualist and monist make unscientific claims as you’ve presented them. Let’s see if I can restate the question such that we can all agree (which is to say, agree upon the question, not the answer).
As a monist I believe that all of reality functions causally. Thus when we record dynamics which some consider otherworldly spiritual evidence, I tend to look for causal dynamics of this world for explanation. Anyway the question that I believe we should all be able to agree upon is this: Are there “spirits” which affect our world from outside of it, or are we simply too ignorant about our own world to effectively explain certain causal circumstances that occur here?
I’m well aware of the dangers of circular reasoning, though my single principle of metaphysics is not circular. It merely states that to the extent that causality fails, things can’t be figured out anyway. Furthermore science thus becomes obsolete to the extent of this failure. If my principle were instead to state that causality never fails, then it would indeed become a trap for circularity. (Of course as I stated last time this principle is true by definition, though in an open ended way that makes no ontological claims in the end.)
I agree with you about the state of philosophy, and let me emphasize that I don’t mind it continuing on exactly as it always has. Still in addition to this I would like something else to be developed as well. I’ve framed reality as a vast jigsaw puzzle where each piece fits together by means of causality. Thus scientists can be considered people who try to epistemically reconstruct this puzzle in order to make tangible sense of how reality functions. But apparently there are certain elements of reality that human convention has placed outside of science, though inside of philosophy. Thus I’m saying that we’ll also need a respectable group of philosophers which is able to develop effective agreed upon principles of metaphysics, epistemology, and value, in order to contribute to the work that scientists in general are doing. Today professional philosophers aren’t effectively contributing given that they provide no generally accepted principles. But even a small group of philosophers that reaches some agreements would suffice, that is if scientists end up finding their principles useful for their own work.
I also believe that there are plenty of uncontroversial understandings in our softer sciences, as well as lots of controversial speculation in our harder sciences. My main concern is to help our softer sciences become harder given that they seem to have far more room to grow right now. (Still I believe that hard sciences could at least use some effective principles of metaphysics and epistemology to help them along.)
Yes I think you’re correct about that, and as it happens I’m trying to get this straightened out by means of my second principle of epistemology. Consider the following iteration where I substitute your “observation” and “explanation” terms where I usually use “evidence” and “theory”. I may decide that your terms are a bit more effective:
Where you state, “Explanations may be colored with the presumptions” you seem to be referring to biases. They’re even a problem in hard sciences since theorists are naturally rewarded for developing successful rather than unsuccessful theories. But biases seem far more problematic in softer sciences given implications that tend to be more personal. For example it should be helpful for scientists to understand if there are any significant divergences in cognitive abilities between different races of people, and if so, what those differences happen to be. A given scientist however may have personal reasons to either find or not find such diverging cognitive abilities. An excellent case may be made that our soft sciences remain soft given how “personal” standard considerations in them can be.
Regarding the obvious usefulness of our hard sciences, well, there’s just so much more that they’ve effectively taught us than soft sciences so far. I do suspect that soft sciences will teach us plenty as they progressively harden up, however.
Regarding my “squeal like hell” remark, that was undiplomatic. But if a small group of philosophers were to agree upon certain principles, such as my own four, and scientists in general were to find these principles useful for their own work, then I’d expect this small group of philosophers to grow larger. Eventually I’d expect both a traditional “humanistic” form of philosophy, as well as a “science” form that would continue growing its field of common understandings.
I’ve been hanging out with enough philosophy professors on the blogs over the past four plus years to believe that I understand how sensitive this topic happens to be. I think some would bitterly continue on as before given this split, some would play both sides, and a few would even go all the way over to the other side and eschew philosophy’s past. But the big change is that people in general would start to perceive philosophy as both a humanistic “art” to potentially appreciate, packed with two and a half millennia of (western) content, as well as another field that has developed various agreed upon principles which scientists (and others) use to help them figure things out.
Furthermore if my single principle of metaphysics were to be among these accepted principles, I believe that this would naturally put the minority dualists in science at very much of a disadvantage. Today monist scientists do consider dualistic ideas, though my principle suggests that such “magical” explanations are a waste of their time, even if true.
Finally you’ve asked about valence and my own models of computation. I’ll give this at least an abbreviated go to help you frame any further questions that you might have. I’m thrilled that you’re curious!
I segregate all of reality into mechanical function, or everything that existed before the emergence of “life” (as far as I can tell), and computational function that came once life emerged. As I define it, life brought computation given its genetic material. Here chemical substances become taken as inputs that are algorithmically processed by means of genetic material for output reactions. Of course evolution continually refined the function of genetic material in order for life to better survive and proliferate.
I theorize a second form of computer to have emerged once multicellular life evolved to accept sensory inputs that were processed for full body output. This is the neuron based computer that’s commonly known as a “brain”. Apparently these central processors incited the Cambrian explosion of life 541 million years ago.
As you know far better than I, this first computer functions by means of chemical dynamics. Then the second computer functions by means of neurons. Furthermore I should mention that the technological computers which we humans build instead function by means of electricity. In a chronological sense these are actually the fourth variety, so let’s now get into the third “conscious” type of computer and what drives its function.
I believe that the neurological form of computer hit a wall that it couldn’t overcome by means of standard computation. In diverse environments apparently there are too many circumstances for normal “if this… then do that” programming to suffice. Thus I believe that a very special third form of computer evolved, and did so through the neuron based computer. This one isn’t compelled by means of chemistry (1), or neurons (2), or electricity (4). I consider this third conscious form of computer to instead function by means of a punishment/ reward dynamic that I’ve lately been calling “valence”. For this type of computer, unlike anything else, existence can be anywhere from horrible to wonderful. Apparently by means of such purpose driven existence evolution could let the associated personal entity figure things out that couldn’t effectively be programmed for.
Let’s try this a bit more basically. I believe that one product of neuron based computation can be to create a punishment/ reward dynamic for something other than it to experience. Initially this would have been inconsequential to organism function and may have died out and emerged countless times. But apparently at least one time there were enough coinciding mutations to put this dynamic in charge of some manner of organism function. Thus if something felt good it had reason to do more of it, and if something felt bad it had reason to do less of it. I’m referring to the evolution of teleological function itself. I believe that this evolved to become the conscious based computer by which each of us experiences existence. This computer does not exist as the brain itself, but is rather a potential output of the brain.
Eric, firstly I’m impressed with the depth of thought you have put into your model. I feel I may be getting a sense of it now. I especially enjoy the way you present it in the evolutionary sense. Your principle of metaphysics does seem to give a coherence to your views as well.
The differences between our narratives have become clearer to me now. Let’s focus on them. Firstly, as you put it
Spirit here means consciousness. First we have to agree whether consciousness exists. I presume we both agree. A monist and dualist likely agree as well. Moving on, is consciousness part of the world or outside it? Both would agree it’s part of the ‘world’. Then comes the difference:
A monist includes only material things which work causally to be ‘the world’. A dualist includes material things + consciousness as ‘the world’. For a monist consciousness must be part of the material world and for a dualist it exists outside the material world.
Your principle of metaphysics gives some sharpness to this divide and tells me something of the values that separate a monist and dualist. A monist aspires for more coherence and a dualist aspires for more acceptance. But first we have to get a clearer picture of consciousness.
Let’s take your model for that purpose: everything seems to be making sense apart from this
After this it makes sense again when you say “Apparently by means of such purpose driven existence evolution could let the associated personal entity figure things out that couldn’t effectively be programmed for…”
So let’s focus on the first statement: ‘horrible’ and ‘wonderful’ are adjectives to describe phenomenal experience i.e. conscious experience. So what you are saying is that this computer is already capable of conscious experience. But that is the whole crux of the problem! How does this conscious experience (of wonderful and horrible) arise? In what world if not the material is this experience present? If it is present in the material world how does it form from the materials?
In essence the monist and the dualist are on the same page but prefer different stances. A monist insists there has to be a material and causal existence of the spirit; the dualist wants to accept that consciousness cannot be material and hence that material causality fails here. They would both agree that if material causality fails, things can’t be figured out by us. The dualist is already there, accepting that it has failed, while the monist resists it by trying to conceive of ways to form a coherent picture. One thing to be clear about here is that even if we do end up accepting that causality fails in these situations, it does not mean consciousness does not exist as part of the world; yes, it probably can’t be figured out, but we know it exists nonetheless.
My position is slightly different from both the monist and dualist. To me both seem to be trying to reduce cognitive dissonance, and they let their presumptions guide their world view. The honest answer I can think of is consciousness exists and so far it is unknown how it forms or where it exists.
I’m very pleased with your response! You seem satisfied with my theory that the reason consciousness evolved at all, is because non-conscious computers can’t effectively be outfitted with sufficient programming, thus mandating an agent. Furthermore I presume that it’s possible for such a computer to fabricate punishing/ rewarding experiences for something other than it to experience, thus creating the necessary conscious entity. Why do I presume this? Because I take the metaphysical leap of supporting “reason” over “faith”. I realize that I could be wrong about this, which I believe puts us more or less in the same camp. I do not know how non-conscious computers create positive and negative experiences, and even if they do, suspect that human engineers will never quite get hold of this particular “how”.
(Did you know that most modern physicists today are dualists by means of their interpretation of Heisenberg’s Uncertainty Principle? Given that matter functions as neither particle nor wave in the end, though we measure particles in certain ways and waves in others, they attribute the uncertainty found in our measurements here to a void in causality itself! To this day the legacy of Einstein remains quite tarnished, given that he merely presumed such uncertainty to instead reflect a standard void in human understanding.)
I am a monist, but it seems to me that the monist/ dualist question is actually a difference that makes no difference, or at least not in one very important way. Whether punishment/ reward occurs naturally or supernaturally in the end, what we mainly seem to need are more effective models of our nature itself, or advancement in psychology, psychiatry, sociology, and so on. I believe that our paradigm of morality has hindered us here, or certainly my ability to help others understand the nature of the models that I’ve developed.
Whether natural or supernatural, I believe that there is a punishment/ reward dynamic to reality which constitutes all value for anything that exists. Let’s call this “valence”. Thus there are genetic forms of computer which create life by means of chemical algorithmic processing (1), as well as neuron based computers which do so by means of these specialized cells (2), as well as the technological computers that we build which function by means of electricity (4), and a conscious form of computer that functions by means of valence — existence can be anywhere from horrible to wonderful for it (3).
Given that good and bad existence does exist, assessing the goodness and badness of any element of reality should be a matter of simple arithmetic, at least conceptually. Here we can identify one or more conscious entities over a given period of time, and the welfare of this defined subject will be represented by the positive minus the negative valence experienced by it over that period. Thus a policy which lowers this score will be bad for that subject specifically, while one that raises it will be good. I refer to this as Amoral (since it’s not about rightness and wrongness), Subjective (since a specific subject per period is mandated), Total Valence (since each unit is inherently valuable), or ASTV.
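To make the “simple arithmetic” of ASTV concrete, here is a minimal sketch in Python. This is purely my own illustration of the idea as stated above, not anything the commenter wrote; the function name and the sample valence numbers are hypothetical.

```python
# Illustrative sketch of Amoral Subjective Total Valence (ASTV):
# the welfare of a defined subject over a period is the sum of the
# signed valence units it experiences (positive = rewarding,
# negative = punishing). All names and numbers here are hypothetical.

def astv_welfare(valences):
    """Sum signed valence units experienced over a period."""
    return sum(valences)

# A policy is bad for the subject if it lowers this total, good if it raises it.
baseline = astv_welfare([3, -1, 2])       # 4
under_policy = astv_welfare([3, -1, -4])  # -2
policy_is_bad = under_policy < baseline   # True
```

Note that the whole scheme rests on choosing the subject and the period up front; the same policy could score well for one subject and badly for another, which is exactly why the "Subjective" label is part of the name.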
The main problem that people seem to have with my approach, is that good and bad in itself is not what they’re concerned about. Instead they seem to demand judgement, or must assign rightness and wrongness labels to human behavior. So strong has this paradigm been, that apparently mental and behavioral scientists have shied away from value speculation even beyond morality. But if the conscious form of computer happens to be driven by means of value, as I propose, and if scientists do not yet formally identify a value element to reality (presumably given our morality paradigm, though regardless, no such claim yet exists in science as far as I can tell), then consciousness exploration today should be quite hindered.
By beginning with a formal theory of value, I believe that I’ve been able to develop a very effective model of mental dynamics. I’d be happy to share this model with you and your readers here, or perhaps if you have some thoughts about consciousness yourself, I could do so in a future post? I’m up for whatever suits your purposes. Given that I’ve been developing these ideas since my late teens, I would much rather be effective than hasty.
“…my direction has been going from a mechanistic/ neuroscience/ materialist explanation to a more psychoanalytic idea…”
Wouldn’t psychoanalysis have more to do with the unconscious than the conscious?
Who is the brilliant psychiatrist anyway? I just found your site and haven’t found where you might have mentioned him/her.
Not Mark Solms by any chance?
Consciousness is odd in that we can think of it as a mystery, yet in reality it is all we have to work with. I’m not saying that consciousness is all there is, but that everything in the world is mediated through it.
Thanks for sharing your thoughts. This psychiatrist isn’t someone famous that anyone could recognize, so I haven’t mentioned his name (I haven’t asked him either). But in our psychiatric hospital, everyone acknowledges that he is brilliant in one way or another. I like him because he’s been among the few people with whom I could have really insightful discussions that shape my perspective (the same is true of some of the people I’ve come to know through creating this blog). I found his discourse very intriguing because it wasn’t like anything I had known about, and it isn’t traditionally mainstream either, so few people seem to appreciate it. However, it is a narrative shaped by famous thinkers. Much of it gets labeled ‘post-modernism’, which is a useless label.
However, I can tell you that his ideas are pretty much based on the famous psycho-analyst Jacques Lacan. For a more contemporary flavor look up Slavoj Zizek.
Psychoanalysis isn’t solely concerned with the unconscious. It is a way to understand the human psyche. There are variations within psychoanalysis and different countries/ groups/ cultures/ therapists tend to favor one or another approach. Lacanian psychoanalysis is based on the work of Jacques Lacan, however, he considered himself to be a Freudian.
It would be long and complicated to summarise even the basics here. However, in essence, it tells me that we create narratives to make sense of things. And this Lacanian narrative itself is the best narrative I can conceive of so far to make sense of it all.
Having said that it hasn’t said anything (as far as I am aware) about how or why we have conscious experience (the very basic phenomenological experience). But it does tell me something about how our ‘human’ conscious experience might be shaped. For example, the conscious experience of a psychotic person is different compared to a catatonic person, an obsessive person, a depressed person, and a potentially normal person etc. This is very simplistic and all these terms I’ve used are again just labels trying to approximate something we both have in our imagination.
Moving on it has a lot to say about how we come to understand things (anything).
I agree. I would add that perhaps that is the reason it is a mystery. A fundamental problem always remains when trying to explain how consciousness forms. It has existed from the start, it is the subjective-objective divide. In current formulations, we are moving towards an acceptance of conscious experience being a fundamentally existent property in the universe (not that I agree with such an acceptance). Perhaps what we are accepting is that it isn’t explainable through a reductionist approach. That is it can’t be reduced to another fundamental property of the universe.
We could say a feedback loop is a unit of consciousness, as in panpsychism (which is gaining some popularity). But that doesn’t really say anything about consciousness itself. Others are trying to convince themselves that consciousness isn’t a thing. It is a composite of a number of other things.
We could say that when certain groups of specialized neurons communicate with certain other groups of specialized neurons, but this doesn’t lead to an action because another group of neurons inhibits the process – that is felt as an impulse to do something. So this communication is the same as an impulse; the conscious experience of an impulse was actually a communication. Hence, consciousness is a collection of specific, specialized communications.
The problem here is when we say ‘…that is felt as…’ – felt by whom? Whoever, whatever or wherever this is – that is where the conscious experience is happening, NOT the communication we just described. Just because something makes you have a conscious experience (like feeling an impulse in the above case) does not make that something (be it communicating neurons or oranges) become the conscious experience itself.
I apologize for my huge digression.
You want to look at Mark Solms and Oliver Turnbull’s The Brain and the Inner World. Solms is a psychoanalyst. The book has a chapter on Consciousness and the Unconscious.
BTW no problem with the digression.
Solms traces the capability of consciousness to a small group of cells in the brain stem. Damage to these cells results in irreversible coma. The content of consciousness, I would think, involves much more and other parts of the brain. The cells in the brain stem are old in evolutionary terms which suggests that the capability of consciousness probably can be traced back to the earliest multi-cellular organisms which are believed to be worms.
I write about this and some related topics here.
I’ll have a look at it, although it does sound familiar – I probably watched a TEDx talk on similar themes, though I forget who the presenter was.
It is very interesting, and perhaps more importantly useful, to make these discoveries. However, as I said above, it still does not explain consciousness. We are looking at what may be required to have that experience. And we’ve probably narrowed it down (whilst increasing the uncertainty of our answer) by moving from ‘the brain is required’ to ‘that brainstem area is required’. That is useful (and perhaps may open up possibilities of better explanations). But this is exactly analogous to the communication/impulse example I gave above. Here, this brain stem area seems like the on/off button for conscious experience to occur.