Consciousness is an individual product of a biological organism?
2 years ago  ::  Jul 25, 2012 - 12:09AM #51
Mesothet
Posts: 119

Though a little hesitant, I’d like to re-jump-start this thread with some opinions and perspectives.


The potential stipulation that neurological data must be viewed as somehow proving the physicalist stance seems . . . well . . . not scientific. Unless one presumes that science in any way is founded upon absolutist convictions of reality.


A more accurate assessment may be that the physicalist paradigm currently most subscribed to in the sciences serves its pragmatic function relatively well in addressing most data. The rest, as I presume all are aware, will be confirmation biases toward this or that metaphysical outlook.


I have known of neuroscientists who not only were not quite ethical in their authority over others but who also engaged in studies that—for anyone who knows anything about biological evolution—were (and I presume still are) utterly nonsensical. More specifically, the neurological study of bird CNSs in attempts to better comprehend what has been termed the human “language instinct”. Not only do birds and mammals not share any significant homologous evolution, but they also do not hold any significant degree of analogous evolution. Yet, the field being considered sexy, funding was provided for such. Point being, it is quite possible that not all neuroscientists are well versed in what in the humanities is termed philosophical reasoning.


Data will be data however.


[Of course, there will be honest neuroscientists galore—but few will partake of asking big questions such as what consciousness truly is: most will work in specialized fields.]


As a relevant analogy to this topic of consciousness, were one to ask “What is true objectivity?” what may anyone honestly answer but either a) “no one knows” or b) immaturely provide a subjectively biased stipulation which, upon analysis, will ultimately be false? Nevertheless, does this in any way then imply that “there does not exist such a thing as true objectivity”?


One may likewise assess the reality of consciousness. It may be made up of “voters” within the mind (David Hume’s commonwealth hypothesis of consciousness has been around for quite some time—though he himself was not fully satisfied with it), but there will yet remain some aspect of self which will perceive/interpret.


This aspect of self will not be limited to only humans or organisms with a CNS/brain.


As with true objectivity, all sapient persons will know of it as being real but none will know what the thing which perceives per se truly is.


As a perspective, using the Hindu concept of Maya (illusion) as a premise of informational reality, one could affirm that on an ultimate level of reality only consciousnesses (that which perceives) are real, and all else is illusion within which consciousnesses are enveloped and by which they are limited. This, to me, seems to be a very simple—and very ancient—argument. I bring it up for the following two reasons: 1) it can in no way be falsified via data of any kind, and 2) it can hold full empirical integrity. Hence, this alternative perspective to that of physicalism can be fully in tune with such realities as neurological data, the theory of biological evolution, etc.


One could say that such a currently “alternative” metaphysical perspective will be—just as physicalism is—fully liable to its own confirmation biases. However, someone such as myself may contend that it will have far more explanatory power.


[I hope I haven’t placed my foot into my mouth with all this; but, if I’m in any way wrong, I’d like to learn how . . . ]

2 years ago  ::  Jul 25, 2012 - 6:29PM #52
Faustus5
Posts: 2,022

Jul 25, 2012 -- 12:09AM, Mesothet wrote:

The potential stipulation that neurological data must be viewed as somehow proving the physicalist stance seems . . . well . . . not scientific.


It's more like this: all of science proceeds under the reigning assumption that physicalism/materialism is true. So long as we are rewarded with a constant bounty of successful models within this paradigm, that initial assumption is deemed an accurate gauge of the ontology of the universe.


It could always have turned out otherwise. For instance, critics of materialism in past centuries claimed that it would be impossible to synthesize organic matter from non-organic matter, and if that had turned out to be true, materialism would have been falsified.


Jul 25, 2012 -- 12:09AM, Mesothet wrote:

I have known of neuroscientists who not only were not quite ethical in their authority over others but who also engaged in studies that—for anyone who knows anything about biological evolution—were (and I presume still are) utterly nonsensical. More specifically, the neurological study of bird CNSs in attempts to better comprehend what has been termed the human “language instinct”. Not only do birds and mammals not share any significant homologous evolution, but they also do not hold any significant degree of analogous evolution. Yet, the field being considered sexy, funding was provided for such. Point being, it is quite possible that not all neuroscientists are well versed in what in the humanities is termed philosophical reasoning.


Actually, studying bird nervous systems for clues about language makes perfect sense, for these reasons:


1. Even though mammals and birds diverged from a common ancestor quite a long time ago, evolution operates on what is already there, and it could be that the origins of bird song in the neural architecture of their brains was built on the same ancient structures that human language evolved from.


2. Even if 1 were not the case--and it seems rather likely that it would be--convergent evolution happens, and it would be valuable to see if we could learn lessons from how bird song is structured in their brains that might be carried over in the brains of humans. We might be encouraged from what we found to look in places within the human brain that had been neglected previously.


3. Recent studies of parrots and other birds with the power to mimic have suggested that they might understand, in a limited way, how to use the words we thought they were just randomly aping. I want to know if that is true and what, in their brains, allows them to do so.


Jul 25, 2012 -- 12:09AM, Mesothet wrote:

[Of course, there will be honest neuroscientists galore—but few will partake of asking big questions such as what consciousness truly is: most will work in specialized fields.]


Well, in the last few decades many of them have become much bolder and have tried to piece together the things the specialists were concentrating on. That's how Baars put together the global workspace model.


Jul 25, 2012 -- 12:09AM, Mesothet wrote:

As a relevant analogy to this topic of consciousness, were one to ask “What is true objectivity?” what may anyone honestly answer but either a) “no one knows” or b) immaturely provide a subjectively biased stipulation which, upon analysis, will ultimately be false? Nevertheless, does this in any way then imply that “there does not exist such a thing as true objectivity”?


I think we can leave philosophers to nitpick over such issues with the confidence that, if they ever resolve them, there would be scant consequence for the science of mind. Scientists are pretty good already at figuring out when a consensus is justified (i.e., the model or claim in question is "objective") and when one isn't.


Jul 25, 2012 -- 12:09AM, Mesothet wrote:

One may likewise assess the reality of consciousness. It may be made up of “voters” within the mind (David Hume’s commonwealth hypothesis of consciousness has been around for quite some time—though he himself was not fully satisfied with it), but there will yet remain some aspect of self which will perceive/interpret.


And that, too, is done by coalitions of networks of neurons. There is no self in there guiding this. The self is what emerges from these "votes", after the fact.

2 years ago  ::  Jul 25, 2012 - 8:50PM #53
farragut
Posts: 3,944

" Recent studies of parrots and other birds with the power to mimic have suggested that they might understand, in a limited way, how to use the words we thought they were just randomly aping. I want to know if that is true and what, in their brains, allows them to do so. "


 


I am quite persuaded that the African Grey of the family of the late Charles Fiterman pretty much proved the case.

2 years ago  ::  Jul 26, 2012 - 12:27AM #54
Mesothet
Posts: 119

Jul 25, 2012 -- 8:50PM, farragut wrote:


" Recent studies of parrots and other birds with the power to mimic have suggested that they might understand, in a limited way, how to use the words we thought they were just randomly aping. I want to know if that is true and what, in their brains, allows them to do so. "


 


I am quite persuaded that the African Grey of the family of the late Charles Fiterman pretty much proved the case.




Ah . . . the Grey parrot: my favorite in the genus. Ethological research on many wild bird species in their natural habitats seems typically to indicate bird intelligence as well: the Corvus genus as just one example, which I know a little of, that now comes to mind.


:) I never claimed that all bird species were relatively unintelligent.

2 years ago  ::  Jul 26, 2012 - 12:34AM #55
Mesothet
Posts: 119

Faustus5, 


So that I don’t unnecessarily jump to erroneous presumptions, I’d like to know whether you take consciousness to be strictly limited to those organisms which are endowed with a CNS.


I, at least for now, take consciousness—non-anthropocentrically contemplated—to denote the act of perceiving anything other than that which is doing the perceiving (I take this denotation of consciousness to hold valid irrespective of metaphysical/ontological stance as to what such “perceiver” might in truth be).


I grant this is a very generalized understanding of consciousness; however, I view the act of perception as impossible without the concordant faculty of interpretation. For example, if an amoeba is in any way presumed able to perceive, then it will also be required to interpret—to keep this simple—external stimuli: at minimum, this interpretation of external stimuli will result in the given organism’s subjective assignment of either positive or negative valence to them (i.e., in either attraction toward or avoidance of the given perceived stimuli). Many an organism devoid of a CNS will typically be assessed as capable of some form of perception—I myself will contend this of all organisms, as well as of cells pertaining to multicellular organisms. The assessment that perception requires interpretation of information will, then, result in the assessment that perception is equivalent to some at least minimal degree of consciousness.
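As a toy illustration of this minimal perceive-interpret-valence loop, here is a sketch for a hypothetical single-celled agent; the stimulus names, valences, and responses are purely illustrative assumptions, not biological data:

```python
# Minimal "perceive -> interpret -> valence -> act" loop for a
# hypothetical single-celled agent. Stimuli and their valences
# are illustrative assumptions only.

VALENCE = {"glucose": +1, "acid": -1}  # hypothetical interpretations

def respond(stimulus):
    """Interpret a perceived stimulus as attraction or avoidance."""
    valence = VALENCE.get(stimulus, 0)
    if valence > 0:
        return "approach"   # positive valence -> attraction
    if valence < 0:
        return "avoid"      # negative valence -> avoidance
    return "ignore"         # uninterpreted stimulus carries no meaning

print(respond("glucose"))  # approach
print(respond("acid"))     # avoid
```

The point of the sketch is only that even the simplest perception, on this view, already involves an interpretive step assigning the stimulus a positive or negative meaning for the organism.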


If there are disagreements, please outline the reasons for such. In this case, also please describe which organisms, in your opinion, will have such a thing as consciousness and which will not.


This clarification, I believe, may help in better common understanding.


------------------


It's more like this: all of science proceeds under the reigning assumption that physicalism/materialism is true.


To me this overlooks a significant portion of empirical scientists who, aside from valuing the objectivity of empirical evidence, are also theists of one form or another. I’m not familiar with statistics on the population of theistic (as compared with atheistic) scientists across all branches of the empirical sciences—but I am familiar with the reality that such scientists do at least exist in the cognitive sciences. To my best current understanding, most theists are not physicalists by default—and science seems instead to proceed under the optimal intent of scientists to best uncover that which is ultimately true, regardless of ontological assumptions about reality.


The following seems to be tangential to the topic of consciousness. However, so that I don’t appear overly ignorant of the topic matter:



1. Even though mammals and birds diverged from a common ancestor quite a long time ago, evolution operates on what is already there, and it could be that the origins of bird song in the neural architecture of their brains was built on the same ancient structures that human language evolved from.



Were this a genetics issue of which genes influence which cognitive behaviors, I might agree. As a shortcut explanation: the dinosaurs and mammals that coexisted way back when were drastically different in both physical and behavioral phenotype. Birds (which evolved from dinosaurs) and primates (which evolved from the small placental mammals that survived the dinosaurs) have gone through drastically different evolution. The likelihood that bird and human “instinct for grammar” (which great apes do not share) holds a common ancestry seems rather improbable, if that much.



2. Even if 1 were not the case--and it seems rather likely that it would be--convergent evolution happens, and it would be valuable to see if we could learn lessons from how bird song is structured in their brains that might be carried over in the brains of humans. We might be encouraged from what we found to look in places within the human brain that had been neglected previously.



By this argument of convergent evolution, one would also be encouraged to spend the valuable resources of both time and funding in studying squid eyesight so as to discover new insights concerning the occipital lobes, etc., in vertebrates--but I fail to see the reason for such research. After some time (only as a measly Research Tech) in a neuroscience lab working on finch brains, I’m fairly certain that--aside from a bilateralization of CNS hemispheres—there is no parallel between avian and mammalian brain structure (e.g. lobes, Wernicke & Broca, etc.).


#3, on the other hand, does not deal with the issue initially addressed.


Again, I’ve replied with my best opinions so as not to seem as though I was merely pulling stuff out of nowhere. But all this may be relatively beside the point of consciousness per se. For now, maybe we can agree to disagree on the benefits/detriments of avian-species research for the specific purpose of understanding Homo sapiens’ capacity for instinctive grammar during the linguistic critical period of development.


Scientists are pretty good already at figuring out when a consensus is justified (i.e., the model or claim in question is "objective") and when one isn't.


Then, between us non-professional scientists, do we at least agree that “objective” signifies “impartial”, which, in a non-hyperbolic sense, implies an ever greater reduction of subjectivity (a.k.a. bias)?


And that, too, is done by coalitions of networks of neurons. There is no self in there guiding this. The self is what emerges from these "votes", after the fact.


Aside from “self” being a very relative term, intended differently by a multitude of discordant perspectives, it first seems beneficial to better understand your position: is this, to you, best data-supported opinion or blatant, incontrovertible fact?


At least for now, you seem to have overlooked the alternative model of reality proposed at the end of post #51. This is, after all, the alternative I’d like to uphold as being at least a contending possibility to that of physicalism.

2 years ago  ::  Jul 28, 2012 - 11:19AM #56
Faustus5
Posts: 2,022

Jul 26, 2012 -- 12:34AM, Mesothet wrote:

So that I don’t unnecessarily jump to erroneous presumptions, I’d like to know whether you take consciousness to be strictly limited to those organisms which are endowed with a CNS.


Yes. Or the functional equivalent, if you don't want to rule out aliens or artificial intelligence. The concept doesn't make much sense otherwise.


Jul 26, 2012 -- 12:34AM, Mesothet wrote:

I, at least for now, take consciousness—non-anthropocentrically contemplated—to denote the act of perceiving anything other than that which is doing the perceiving (I take this denotation of consciousness to hold valid irrespective of metaphysical/ontological stance as to what such “perceiver” might in truth be).


I think it is best to let perception just be perception and not have so wide a concept of consciousness that it ends up applying to amoebas.


You want the word to pick out something that makes us humans different from other animals. Start with beings you know are conscious--us. Then figure out what makes us conscious. Then see if those things are measurably present in other animals.


Jul 26, 2012 -- 12:34AM, Mesothet wrote:

If there are disagreements, please outline the reasons for such. In this case, also please describe which organisms, in your opinion, will have such a thing as consciousness and which will not.


Right now I am persuaded by people like Dan Dennett that only human beings who can speak and understand a language are conscious. I've found his arguments that human level consciousness is only made possible through the acquisition of language to be mildly convincing. Infants and animals have something important going on, but language is enormously important.


This is in part a consequence of the methodology I recommended earlier--start with beings you know are conscious, see what makes them so, and then only widen the circle when you have measurable criteria.


But--I'm a fence sitter on this one. Something seems wrong about making language the deal-maker, but I can't articulate what seems wrong and I can't rebut Dennett's arguments, though I'd like to.


Jul 26, 2012 -- 12:34AM, Mesothet wrote:

To me this overlooks a significant portion of empirical scientists who, aside from valuing the objectivity of empirical evidence, are also theists of one form or another.


Scientists who are theists are going to take physicalism or materialism as a methodology when they are doing their work. They will believe that in fact materialism in the end isn't true and that there are realms science cannot measure or detect. But they know--if they are good scientists--that when doing science, only natural explanations are allowed.


Jul 26, 2012 -- 12:34AM, Mesothet wrote:

The likelihood that bird and human “instinct for grammar” (which great apes do not share) holds a common ancestry seems rather improbable, if that much.


I disagree. Remember, they both started from a common ancestor and live on the same planet with similar selection pressures. Evolution can't turn just anything into a mechanism for verbal utterances and perceptions. It has to work with mechanisms already there doing something related to verbal utterances and perceptions. So it makes sense to see if there are commonalities between the neural architecture for bird song and the so-called language instinct. Evolutionary logic suggests there should be.


Jul 26, 2012 -- 12:34AM, Mesothet wrote:

By this argument of convergent evolution, one would also be encouraged to spend the valuable resources of both time and funding in studying squid eyesight so as to discover new insights concerning the occipital lobes, etc., in vertebrates--but I fail to see the reason for such research.


Biologists look into this sort of thing because they want to understand how convergent evolution happens.


Jul 26, 2012 -- 12:34AM, Mesothet wrote:

After some time (only as a measly Research Tech) in a neuroscience lab working on finch brains, I’m fairly certain that--aside from a bilateralization of CNS hemispheres—there is no parallel between avian and mammalian brain structure (e.g. lobes, Wernicke & Broca, etc.)


Time will tell. It is still worthwhile figuring out how bird brains implement bird song even if we don't thereby learn something about ourselves.


Jul 26, 2012 -- 12:34AM, Mesothet wrote:

Then, between us non-professional scientists, do we at least agree that “objective” signifies “impartial”, which, in a non-hyperbolic sense, implies an ever greater reduction of subjectivity (a.k.a. bias)?


That sounds like a reasonable sound bite to me!


Jul 26, 2012 -- 12:34AM, Mesothet wrote:

Aside from “self” being a very relative term, intended differently by a multitude of discordant perspectives, it first seems beneficial to better understand your position: is this, to you, best data-supported opinion or blatant, incontrovertible fact?


I think that overwhelmingly, the best interpretation of the data is that the self is an abstraction which we spin out of concrete instances of behavior. A center of narrative gravity, if you will.


Jul 26, 2012 -- 12:34AM, Mesothet wrote:

At least for now, you seem to have overlooked the alternative model of reality proposed at the end of post #51. This is, after all, the alternative I’d like to uphold as being at least a contending possibility to that of physicalism.


Models have to be based on concrete, testable reality. The alternative you suggested didn't meet that standard as far as I could tell. That's why you don't actually see it "competing" in the sciences with physicalism.

2 years ago  ::  Jul 28, 2012 - 3:04PM #57
Mesothet
Posts: 119

So as to first take this out of the way: as for me, I will agree to disagree on our differences of reasoning concerning biological evolution & neuroscience research on this thread. I yet disagree on many a level. There’s one book I recall reading (a long time ago), for example, which more or less stipulated that Homo sapiens typically stay together in relationships for only approx. 4 years because we share convergent evolution with birds. (There are more confounding variables—and premature conclusions—to this than can be mentioned, IMO.) I suppose this general topic would in itself make for a good conversation; I, however, am quite interested to continue the topic of consciousness.


Yes. Or the functional equivalent, if you don't want to rule out aliens or artificial intelligence. The concept doesn't make much sense otherwise.


Myself, I’m more than doubtful that strong AI can ever be accomplished: at core issue to me is the apparent impossibility that trust (for oneself, for others, for context, for that which is (e.g., ontologically) real, etc.) can ever be programmed. I.e., the arriving at and/or maintenance of cognitive certainty within a realm of objective uncertainty (cf. academic skepticism) seems a) intrinsic to the reality of any consciousness [to me, even when contemplated at the level of an amoeba*] and b) arguably indicative of some degree of ontologically valid free will (between both intrinsic and extrinsic alternatives) which, if true, can at best only be approximated via goal-driven chaos algorithms—but never fully actualized. Even from this vantage, however, this is not in any way to deny the value of cybernetics.


* Obviously, amoebas are to be assessed as endowed with far more a priori behaviors than a species such as humankind. Yet, even here, some more recent research has provided some evidence that amoebas might be able to engage in some degree of learning [e.g. pre.aps.org/abstract/PRE/v80/i2/e021926]: if so, this would require some degree of what can only be termed forethought: to learn, one needs to somehow choose between perceived alternatives in a manner that takes into account (probabilities of) future benefit to self. Needless to say, this would be—at least via current paradigms—unintelligible from a typical physicalist stance; for the mechanisms that would allow any degree of learning in a unicellular organism (I would add, perception of extrinsic stimuli as well) are anything but currently apparent.


You want the word to pick out something that makes us humans different from other animals.


I can well appreciate the importance of dealing with human consciousness above all others. Nevertheless, from where I stand, if biological evolution is to be taken into account—and, I fully believe that it should—there will then either be a cline of progression in degree and/or quality of consciousness leading up to humans or, alternatively, one will presume that a quite miraculous quantum leap has occurred somewhere in the evolutionary history of sentience—an unaccountable transformation from automata to (relatively speaking) autonomous organisms (e.g. the human species).


This latter view, to me, has always seemed to be only a more deeply buried form of Cartesian Dualism . . . all the more odd because it assumes the monistic label of physicalism.


But--I'm a fence sitter on this one. Something seems wrong about making language the deal-maker, but I can't articulate what seems wrong and I can't rebut Dennett's arguments, though I'd like to.


Familiar with this argument (though I have not read most of Dennett’s publications and arguments—but have browsed synopses online), I currently can only try to present the issue in the form of a choice of belief:


I currently presume that this cannot be empirically demonstrated outside of common agreement on introspective analysis of subjective reality as it applies to humans. Personally, I at times hold meaningful ideas (etc.) which I find relatively impossible to address via language—hence, ineffable known concepts. To me, artistic expression will be one means of making such otherwise ineffable subjective realities conveyable to myself and others: e.g. poetry, painting, music, etc.

Given this personal reality, I then ask myself whether such (what I’ll term in semi-agreement with Kantian philosophy) noumena of thought [e.g. knowing one knows something that is on “the tip of one’s tongue” yet in no way knowing what it may be in any phenomenal sense—hence something grasped that yet does not hold any phenomenal reality to one’s self] will be fully dependent upon phenomena (e.g. language) or whether, otherwise, language is merely like pre-structured boxes into which such noumena of thought are stuffed so as to best convey info to oneself or another. The latter view, if upheld, will discredit the “consciousness is fully dependent upon language” perspective.

Also noteworthy: many a lesser animal will convey quite meaningful information via non-verbal communication, hence begging the question of what “language” itself might be construed to signify. For example, bees will often adequately convey the direction and distance of pollen to other bees via body language. Will this then be construed as a form of language between bees? Though not the easiest topic to quickly resolve, I myself do uphold that language is only secondary to consciousness.
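The bee example can be framed as a small encoding question: the dance carries two parameters (angle and waggle duration) that decode into a heading and a distance. The conversion rate below is an illustrative assumption, not measured bee data:

```python
# Toy decoder for a waggle-dance-like signal: dance angle (relative
# to the sun) maps to a flight heading, waggle duration to distance.
# The metres-per-second rate is a made-up illustrative constant.

def decode_waggle(angle_deg, waggle_seconds, metres_per_waggle_second=750.0):
    """Map a dance reading onto (heading in degrees, distance in metres)."""
    heading = angle_deg % 360.0
    distance_m = waggle_seconds * metres_per_waggle_second
    return heading, distance_m

print(decode_waggle(40, 2.0))  # (40.0, 1500.0)
```

Whether such a fixed two-parameter code counts as “language” is exactly the question raised above; the sketch only shows that the dance carries definite, decodable information.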


Scientists who are theists are going to take physicalism or materialism as a methodology when they are doing their work.


This may be right. If so, then such theists will primarily hold a dualistic ontology. I myself have known of Hindus working in neuroscience (many science-driven folks in India as well); in which case, they would likely have been of a monistic ontology as previously articulated (what in the West has traditionally been termed philosophical idealism). There is nothing within such a worldview, however, that would deny objective reality.


I think that overwhelmingly, the best interpretation of the data is that the self is an abstraction which we spin out of concrete instances of behavior. A center of narrative gravity, if you will.


:) Sounds quite Buddha-like. Though he did state that (in his opinion) the truth is neither that we have a self/soul nor that we don’t have a self/soul. The middle path, though most challenging, here being considered also most valid.


Maybe we could further the commonwealth theory of consciousness—that of at least human consciousness being comprised of votes. I do agree with such a view of human consciousness, but find it to be missing a rather important component: that which a) perceives the votes via introspection and b) chooses amongst the votes, maybe most typically in times of relative uncertainty as to the benefit and opportunity cost of going this way or that.


So as to stay consistent with the metaphor, this missing variable I do presume to itself be resultant of the mind’s voters: a unification of otherwise disparate voters, to which individual voters could be affixed and from which individual voters could dislodge. Nevertheless, there will seem always to remain a unified “perceiver” which will also seem endowed with the ability of choice—here, between intrinsic voters perceived by such perceiver so as to best act/react toward any given stimuli. (Such an interpretation of human consciousness, to me, does seem to be in line with psychoneurological studies of the human brain’s operations.)


[Needless to say, all this is just best current opinions]

2 years ago  ::  Jul 28, 2012 - 4:04PM #58
JCarlin
Posts: 6,401

Jul 28, 2012 -- 11:19AM, Faustus5 wrote:

Right now I am persuaded by people like Dan Dennett that only human beings who can speak and understand a language are conscious. I've found his arguments that human level consciousness is only made possible through the acquisition of language to be mildly convincing. Infants and animals have something important going on, but language is enormously important.


This is in part a consequence of the methodology I recommended earlier--start with beings you know are conscious, see what makes them so, and then only widen the circle when you have measurable criteria.


But--I'm a fence sitter on this one. Something seems wrong about making language the deal-maker, but I can't articulate what seems wrong and I can't rebut Dennett's arguments, though I'd like to.


I would suggest that the functional equivalent of language, that is, the ability to meaningfully convey abstract concepts, would expand consciousness to those animals that can meaningfully communicate these concepts to each other. Canine body language is clearly understood by those who care to make the effort, and can convey abstract concepts like "Let's play" or intricate status levels.


Language is certainly the deal-maker that permits symbolic abstractions, but I am not sure consciousness in a social sense can be restricted to humans. 

J'Carlin
If the shoe doesn't fit, don't cram your foot in it and complain.
2 years ago  ::  Jul 29, 2012 - 7:39AM #59
Faustus5
Posts: 2,022

Jul 28, 2012 -- 3:04PM, Mesothet wrote:

Myself, I’m more than doubtful that strong AI can ever be accomplished: at core issue to me is the apparent impossibility that trust (for oneself, for others, for context, for that which is (e.g., ontologically) real, etc.) can ever be programmed.


I doubt that anyone in the field of AI is even remotely engaged by such issues.


Jul 28, 2012 -- 3:04PM, Mesothet wrote:

I.e., the arriving at and/or maintenance of cognitive certainty within a realm of objective uncertainty (cf. academic skepticism) seems a) intrinsic to the reality of any consciousness [to me, even when contemplated at the level of an amoeba*] and b) arguably indicative of some degree of ontologically valid free will (between both intrinsic and extrinsic alternatives) which, if true, can at best only be approximated via goal-driven chaos algorithms—but never fully actualized. Even from this vantage, however, this is not in any way to deny the value of cybernetics.


I can't make heads or tails of this paragraph. You know you are in trouble when a sentence implies academic skepticism on the part of a single-celled organism.


Jul 28, 2012 -- 3:04PM, Mesothet wrote:

Yet, even here, some more recent research has provided some evidence that amoebas might be able to engage in some degree of learning [e.g. pre.aps.org/abstract/PRE/v80/i2/e021926]: if so, this would require some degree of what can only be termed forethought: to learn, one needs to somehow choose between perceived alternatives in a manner that takes into account (probabilities of) future benefit to self. Needless to say, this would be—at least via current paradigms—unintelligible from a typical physicalist stance; for the mechanisms that would allow any degree of learning in a unicellular organism (I would add, perception of extrinsic stimuli as well) are anything but currently apparent.


Hardly. The abstract you cited provides an utterly mindless, mechanistic model for how the amoeba accomplishes this task. The authors don't even hint that this learning behavior is a problem for current paradigms. Didn't you notice?


Jul 28, 2012 -- 3:04PM, Mesothet wrote:

Nevertheless, from where I stand, if biological evolution is to be taken into account—and, I fully believe that it should—there will then either be a cline of progression in degree and/or quality of consciousness leading up to humans. . .


Of course. But to be strictly rigorous and methodologically scientific, you have to start with humans and then work back, only attributing aspects of consciousness when you can reliably verify that they are present.


Jul 28, 2012 -- 3:04PM, Mesothet wrote:

. . .or, alternatively, one will presume that a quite miraculous quantum leap has occurred somewhere in the evolutionary history of sentience—an unaccountable transformation from automata to (relatively speaking) autonomous organisms (e.g. the human species).


The emergence of language could be just such a quantum leap, and right now we do not have a consensus model about how language evolved.


Jul 28, 2012 -- 3:04PM, Mesothet wrote:

I do agree with such a view of human consciousness, but find it to be missing a rather important component: that which a) perceives the votes via introspection and b) chooses amongst the votes, maybe most typically in times of relative uncertainty as to the benefit and opportunity cost of going this way or that.


I don't think you fully appreciate the point of the model.


Your perception of the "votes" is nothing more than the outcome of the "voting" process. You perceive something when the coalition of neurons representing a perceptual interpretation gains dominance in the workspace of the brain. We can actually see this happening when scanning the brain during experiments.


It isn't that the coalition of neurons representing X presents their victory to a self for entry into conscious experience. Their "victory" just IS your perception.


Similarly, there is no self that chooses which coalition "wins". When a neural population representing a decision gains dominance in the workspace, that just IS your decision. The idea that there is a self inside the brain guiding this process is an illusion.


Jul 28, 2012 -- 3:04PM, Mesothet wrote:

Nevertheless, there will seem to always remain a unified “perceiver” which will also seem endowed with ability of choice—here, between intrinsic voters perceived by such perceiver as to best act/react toward any given stimuli.


That's an illusion. There is no man or woman behind the curtain, so to speak. Just a bunch of mechanisms.

2 years ago  ::  Jul 29, 2012 - 8:02AM #60
Faustus5
Posts: 2,022

Jul 28, 2012 -- 4:04PM, JCarlin wrote:

I would suggest that the functional equivalent of language, that is, the ability to meaningfully convey abstract concepts, would expand consciousness to those animals that can meaningfully communicate these concepts to each other. Canine body language is clearly understood by those who care to make the effort, and can convey abstract concepts like "Let's play" or intricate status levels.


Language is certainly the deal-maker that permits symbolic abstractions, but I am not sure consciousness in a social sense can be restricted to humans.


Yeah, in the end it is a matter of definition. The point that Dennett seems to be making is that language in humans is such a quantum leap above the representational capabilities of other animals that we should restrict the concept of consciousness to only those beings that have what we have.


Yesterday a study discussed on public radio's excellent "Radiolab" program reinforced this point for me, and now I'm really not sitting on the fence anymore: I'm more convinced than ever that the ability of language to create abstractions is even more powerful and essential to consciousness than I had thought previously.


You would think (or I would have before yesterday) that the abstractions language creates for us would be around things like "tomorrow", "borrow", "promise" and other such concepts. But colors? No. Intuitively, one would think that one sees all the colors one's optical capabilities allow, and that's that.


Well, it ain't so simple.


In this study, members of a technologically primitive tribe who did not have a word for the color "blue" were shown a dozen color squares. They were asked to point to the color that was different from the others. The matrix of color squares consisted of various shades of green and one blue square. Amazingly, they were not able to differentiate blue from green. To them, all the squares were the same color.


Now, no one is suggesting that at some raw, purely neurological level, their brains were not capable of registering the fact that the wavelengths from the blue square were different from the green squares.


But apparently, downstream from that purely raw level, where you eventually get to conscious experience, language shapes your experience in ways that are even more fundamental than I had ever imagined. (Trivia: in every language studied so far, blue is always the last color word created. You don't find it in Homer's Greek, for instance.)


I'm reminded of the thesis of that book I mentioned to you last year, The Origin of Consciousness in the Breakdown of the Bicameral Mind. The author argues, on the basis of the analysis of the language in ancient texts, that humans in ancient civilizations weren't fully conscious in the modern sense, even when they had both language and writing. He notes that between the Iliad and the Odyssey, in the former, decisions come from the gods and no one seems to have private thoughts, but in the latter, characters make up their own minds and are capable of deceiving others.


That's going too far, since we know some animals are capable of deceiving others and therefore have a "theory of mind", but it is intriguing to wonder how much the introduction of new concepts makes us radically different than our ancestors or civilizations that don't have the same vocabulary.
