Important Announcement

See here for an important message regarding the community which has become a read-only site as of October 31.

 
Consciousness is an individual...
Faustus5 May 18, 2012 9:00 AM EDT

May 18, 2012 -- 8:28AM, newchurchguy wrote:

Science, in the last 60 years, is no longer material based. It is empirical evidence (as data) based, and there is data about non-material phenomena, such as that evidenced by logical coding and decoding processes.


Provide a citation from peer reviewed science in which there is any example of logical coding or decoding involving properties or processes that do not involve matter or energy. Good luck with that!


May 18, 2012 -- 8:28AM, newchurchguy wrote:

Tononi's integrated information - likewise will not qualify for the list of physical units of measure.


Provide a citation from Tononi's actual work in which he provides a specific measure of information that is not measuring some property of matter or energy. Good luck with that!

newchurchguy May 18, 2012 2:41 PM EDT

There is a big difference between your view of "magic matter" and that of a system having a physical level and a logical level - where some of the "space" includes non-material structures such as logic gates. This is what is in the science literature and what I am reporting. Your statement that a physical substrate has to be "involved" does not address the subject; it is merely a metaphysical side comment.


 


Just like you explaining how the measurement problem “involved” a physical disturbance of the quantum state -- years after the delayed choice experiment was done and repeated at several  labs.


A. Sloman and L. Floridi head the list of speakers at the big event in honor of Turing this summer.


The more abstract the influence, the more obscure the relationship with physical structure and processes.


Human scientists and engineers have discovered the benefits of using more and more powerful virtual machines whose powers and properties are increasingly remote from those of the underlying physical machinery.


There are several reasons to suspect that similar benefits from use of virtual machinery were “discovered” much earlier by biological evolution.


www.cs.bham.ac.uk/research/projects/coga...


• In that case it may be practically impossible to discover what the machinery is – either bottom-up, by studying the chemical and neurophysical details in the hope of finding their functional powers, or outward-in by seeking correlations between physically detectable internal states and processes and externally observable behaviours in different contexts.


• If so, such bottom-up and outward-in research strategies need to be combined with creative (informed) searching (top-down) in abstract design spaces. – A. Sloman




May 18, 2012 -- 9:00AM, Faustus5 wrote:


May 18, 2012 -- 8:28AM, newchurchguy wrote:

Science, in the last 60 years, is no longer material based. It is empirical evidence (as data) based, and there is data about non-material phenomena, such as that evidenced by logical coding and decoding processes.


Provide a citation from peer reviewed science in which there is any example of logical coding or decoding involving properties or processes that do not involve matter or energy. Good luck with that!


May 18, 2012 -- 8:28AM, newchurchguy wrote:

Tononi's integrated information - likewise will not qualify for the list of physical units of measure.


Provide a citation from Tononi's actual work in which he provides a specific measure of information that is not measuring some property of matter or energy. Good luck with that!





Faustus5 May 18, 2012 3:31 PM EDT

May 18, 2012 -- 2:41PM, newchurchguy wrote:

There is a big difference between your view of "magic matter" and that of a system having a physical level and a logical level - where some of the "space" includes non-material structures such as logic gates.


Please provide a citation from a peer reviewed scientific source which documents a logic gate existing independently of matter or energy. Because if you think your last post did anything of the sort, you are once again deluding yourself.


May 18, 2012 -- 2:41PM, newchurchguy wrote:

This is what is in the science literature and what I am reporting.


You don't "report" anything. You make up nonsense that only you believe and then you cite sources which don't have anything to do with what you actually wrote.


May 18, 2012 -- 2:41PM, newchurchguy wrote:

Just like you explaining how the measurement problem “involved” a physical disturbance of the quantum state -- years after the delayed choice experiment was done and repeated at several  labs.


This may come as news to you, but every experiment in quantum physics involves a physical system whose settings determine the outcome.

newchurchguy May 21, 2012 9:18 AM EDT

May 18, 2012 -- 3:31PM, Faustus5 wrote:


May 18, 2012 -- 2:41PM, newchurchguy wrote:

There is a big difference between your view of "magic matter" and that of a system having a physical level and a logical level - where some of the "space" includes non-material structures such as logic gates.


Please provide a citation from a peer reviewed scientific source which documents a logic gate existing independently of matter or energy. Because if you think your last post did anything of the sort, you are once again deluding yourself.




lol -- of course I have never said that a physical logic gate isn't composed of matter. Read my post - I refer exactingly to two levels. Your response reveals your lack of depth in your reading of the citation from A. Sloman. What do you think a logic gate does in a computational system? The physical logic gate is a representation of the actual output of computation and of measurements of logical spaces.


You understand that logical simulations may represent physical events - but you act all confused when it is pointed out that, in science practice, physical events can simulate logical events.


Material logic gates IMPLEMENT functional but abstract logic gates. When you read the words of a storybook, is the story in the letters or in the meaning of the letters? You are as literal about physical events as a Fundamentalist YEC is about their Bible.


read and learn:



A logic gate is an idealized or physical device implementing a Boolean function, that is, it performs a logical operation on one or more logic inputs and produces a single logic output. Depending on the context, the term may refer to an ideal logic gate, one that has for instance zero rise time and unlimited fan-out, or it may refer to a non-ideal physical device.[1] (see Ideal and real op-amps for comparison)


Logic gates are primarily implemented using diodes or transistors acting as electronic switches, but can also be constructed using electromagnetic relays (relay logic), fluidic logic, pneumatic logic, optics, molecules, or even mechanical elements. With amplification, logic gates can be cascaded in the same way that Boolean functions can be composed, allowing the construction of a physical model of all of Boolean logic, and therefore, all of the algorithms and mathematics that can be described with Boolean logic.
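The quoted passage's distinction between an ideal gate (a pure Boolean function) and its physical implementation can be sketched in a few lines of code. This is an illustrative sketch only; the function names are ours, not from any source quoted in this thread:

```python
# An "ideal" logic gate is just a Boolean function: no rise time,
# no fan-out limit, no physical substrate specified.
def nand(a: bool, b: bool) -> bool:
    return not (a and b)

# Gates compose the way Boolean functions compose; NAND alone is
# functionally complete, so every other gate can be built from it.
def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

def xor(a, b):
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))

# The truth table IS the abstract gate; transistors, relays,
# fluidics, or mechanical elements can all implement the same table.
print([xor(a, b) for a in (False, True) for b in (False, True)])  # → [False, True, True, False]
```

A material device that realizes `nand` differently (relays instead of transistors, say) leaves every function above unchanged, which is the two-level point being argued over.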



I have never taken an Idealist position - (it's all immaterial) -- it's been a Neutral Monism position, where physical events and mental events both arise from a neutral natural basis.




newchurchguy May 21, 2012 9:33 AM EDT

May 10, 2012 -- 2:58AM, markom wrote:


I say in our body, because I cannot deny that we experience consciousness, being aware of self (I am) nevertheless. The explanation for this comes from a purely biological perspective. A sensory (five senses) system attached to a complicated electro-chemical neural network memory called the brain gives an illusion of self-awareness by recursive and predictive signal transmission.


Now from this point of view the characteristics of consciousness are subjective, memory-dependent, temporal and changing, illusory, yet developing to some extent we don't really understand yet.


And yes, the word consciousness is very fundamental and practical for a discussion on a topic like this. It would be almost impossible to replace it with another descriptive word and still maintain focus and be understandable.




Biological information processing can be seen as consciousness and limited to biological organisms.  However, I would make the case that "higher love" is not biological and has a signal source outside of our "brains".

markom May 23, 2012 5:40 AM EDT

Great discussion here - hot topic, it seems :) I need to get back to the 'higher love' and other similar topics, because that is a real question and one I have pondered a lot.

stardustpilgrim June 9, 2012 12:45 PM EDT

May 18, 2012 -- 9:00AM, Faustus5 wrote:


May 18, 2012 -- 8:28AM, newchurchguy wrote:

Science, in the last 60 years, is no longer material based. It is empirical evidence (as data) based, and there is data about non-material phenomena, such as that evidenced by logical coding and decoding processes.


Provide a citation from peer reviewed science in which there is any example of logical coding or decoding involving properties or processes that do not involve matter or energy. Good luck with that!


May 18, 2012 -- 8:28AM, newchurchguy wrote:

Tononi's integrated information - likewise will not qualify for the list of physical units of measure.


Provide a citation from Tononi's actual work in which he provides a specific measure of information that is not measuring some property of matter or energy. Good luck with that!




This is always in the back of my mind. What is information which is not related to matter or energy? I don't know if the following applies, but this morning what seems to be an example came to mind.


What is the difference between science as exploration into, and understanding reality, and invention, the movement of the known into the unknown?


Tesla had a great understanding of electricity and was a great inventor. He was a proponent of AC vs Edison's DC. One day he was walking and a poem came to mind. From the words of the poem, on the spot Tesla invented the alternating current motor. A few seconds previous, the alternating current motor didn't exist. Then it existed only in the mind of Tesla. Then he actually built an alternating current motor.


Information appeared in Tesla's mind, from nothing out of thin air. Then Tesla turned the information into the matter and energy of a working electrical motor.


?


sdp 


 

Faustus5 June 9, 2012 1:19 PM EDT

Jun 9, 2012 -- 12:45PM, stardustpilgrim wrote:

Information appeared in Tesla's mind, from nothing out of thin air.


Not out of thin air. He had been thinking about electricity intensely. This would have created physical changes in the networks of neurons which represented aspects of the phenomenon, and when neighboring networks processed the poem, this activated new connections. All physical. Nothing out of thin air.

Beingofone June 21, 2012 1:33 PM EDT

Faust post 8:


At any given time, many modular cerebral networks are active in parallel and process information in an unconscious manner. An information becomes conscious, however, if the neural population that represents it is mobilized by top-down attentional amplification into a brain-scale state of coherent activity that involves many neurons distributed throughout the brain. The long distance connectivity of these "workspace neurons" can, when they are active for a minimal duration, make the information available to a variety of processes including perceptual categorization, long-term memorization, evaluation, and intentional action. We postulate that this global availability of information through the workspace is what we subjectively experience as a conscious state.


--"Towards a cognitive neuroscience of consciousness: basic evidence and a workspace framework", Dehaene and Naccache



I just love it. You want peer review and these wooden heads - just like you - cannot see the elephant in the room. Peer review is the good ole boy network designed to keep a monopoly on real scientific investigation.


This part:


the neural population that represents it is mobilized by top-down attentional amplification





Is almost never addressed by materialistic preachers.


Attention - your attention please. Who or what decides where to focus attention?


No answer? I did not think so.


It's the little guy in your head that decides. The little guy in his head - and it's a little guy in the head all the way down - just like turtles.


Here is the answer you get: it's a giant Turing machine that acts like an algorithm that leads back to the big bang by cause and effect, and that decides by external feedback that pushes the data stream.


Sarcasm intended.


It's so simple it's hard to understand, because it is so very simple.








Faustus5 June 22, 2012 8:19 AM EDT

Jun 21, 2012 -- 1:33PM, Beingofone wrote:

Peer review is the good ole boy network designed to keep a monopoly on real scientific investigation.


Right--when your side loses the debate (and loses in a way that is utterly decisive), the best tactic is to cry "Conspiracy! I'm being oppressed!".


Jun 21, 2012 -- 1:33PM, Beingofone wrote:

This part:


the neural population that represents it is mobilized by top-down attentional amplification



Is almost never addressed by materialistic preachers.


Actually, it is. You only say this because you are not particularly well informed.


Jun 21, 2012 -- 1:33PM, Beingofone wrote:

Attention - your attention please. Who or what decides where to focus attention?


No answer? I did not think so.


You just aren't very well read in this material, so you don't realize that in the very paper I was citing, the following passage appeared addressing the line you are so obsessed with. Here it is:


We should be careful not to take the term "top-down" too literally. Since there is no single organizational summit to the brain, it means only that such attentional amplification is not just modulated "bottom-up" by features internal to the processing stream in which it rides, but also by sideways influences, from competitive, cooperative, collateral activities whose emergent net result is what we may lump together and call top-down influence. In an arena of opponent processes (as in a democracy) the "top" is distributed, not localized. Nevertheless, among the various competitive processes, there are important bifurcations or thresholds that can lead to strikingly different sequels, and it is these differences that best account for our pretheoretical intuitions about the difference between conscious and unconscious events in the mind. If we are careful, we can use "top-down" as an innocent allusion, exploiting a vivid fossil trace of a discarded Cartesian theory to mark the real differences that that theory misdescribed.


Jun 21, 2012 -- 1:33PM, Beingofone wrote:

Its the little guy in your head that decides. The little guy in his head - and its a little guy in the head all the way down - just like turtles.


Again, the combination of ignorance and arrogance in your post is astonishing. People learn about this problem in basic philosophy courses. Three separate professors of mine brought it up. You are not in possession of an amazing critique--you are a bearer of very old news. The model I have described does not fall into the homunculus fallacy, and in fact the author of the paper I cited has written a lot about it. The passage I quoted above shows the correct interpretation of what is meant by "top-down".


Next time you post in this thread, keep in mind that this subject is my academic specialty. You aren't going to surprise me with any new tricks.

Beingofone June 22, 2012 5:08 PM EDT

Faust:


We should be careful not to take the term "top-down" too literally. Since there is no single organizational summit to the brain, it means only that such attentional amplification is not just modulated "bottom-up" by features internal to the processing stream in which it rides, but also by sideways influences, from competitive, cooperative, collateral activities whose emergent net result is what we may lump together and call top-down influence. In an arena of opponent processes (as in a democracy) the "top" is distributed, not localized. Nevertheless, among the various competitive processes, there are important bifurcations or thresholds that can lead to strikingly different sequels, and it is these differences that best account for our pretheoretical intuitions about the difference between conscious and unconscious events in the mind. If we are careful, we can use "top-down" as an innocent allusion, exploiting a vivid fossil trace of a discarded Cartesian theory to mark the real differences that that theory misdescribed.




This is peer-reviewed turtles all the way down. Infinite regression is the answer from the materialistic babble crowd.


Even though it's babble, there are a lot of big words, so the real concept (and there is none to be found) is buried under jargon. It just sounds impressive with all the scientific lingo.


The real concept being presented is - it just happens. It's a cooperative effort, like a democracy - love that line. So my mind is run by a vote - hmmm.


I choose not to think that and instead decide for myself. The above mumbo jumbo is thus refuted.


Yup - nobody and nothing decided what to think, folks. It's a conspiracy of so many properties that the cause cannot be found. So saith the biggest and brightest of physicalism.


And if you question this - you are a conspiracy theory nut job who hates all science.

Faustus5 June 22, 2012 5:41 PM EDT

Jun 22, 2012 -- 5:08PM, Beingofone wrote:

This is peer-reviewed turtles all the way down. Infinite regression is the answer from the materialistic babble crowd.


No, not infinite regression. It is not fallacious to propose a homunculus to perform a function so long as the homunculus is dumber than the agent it is a part of. And then you can propose dumber homunculi-like agents to explain that homunculus, and so on, until you get to the point where you are proposing utterly mindless molecular processes. Not an infinite regress.


Jun 22, 2012 -- 5:08PM, Beingofone wrote:

Even though it's babble, there are a lot of big words, so the real concept (and there is none to be found) is buried under jargon. It just sounds impressive with all the scientific lingo.


Oh, it went completely over your head. Got it.


Jun 22, 2012 -- 5:08PM, Beingofone wrote:

The real concept being presented is - it just happens. It's a cooperative effort, like a democracy - love that line. So my mind is run by a vote - hmmm.


I'm afraid that's more or less what cognitive neuroscience has discovered, whether you understand it or not: there are many times when unconscious processes go on in your brain in which populations of neurons representing different options or interpretations of experience compete with each other for winner-take-all dominance over the nervous system. When you are making choices, that's what is going on inside of you.
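The winner-take-all competition described here can be caricatured in a toy simulation. This is our own illustrative sketch with invented numbers, not a model taken from the cited literature: two populations receive different input drives and mutually inhibit each other, and a small difference in drive is amplified into all-or-nothing dominance.

```python
# Toy winner-take-all dynamics: two "populations", each driven by
# its input and inhibited by the other's activity. Because the
# mutual inhibition is stronger than the self-decay, any gap in
# drive grows until one population suppresses the other entirely.
def winner_take_all(drive_a: float, drive_b: float,
                    inhibition: float = 2.0, dt: float = 0.1,
                    steps: int = 500) -> str:
    a, b = drive_a, drive_b                # start at the drive levels
    for _ in range(steps):
        # leaky integration with mutual inhibition, clamped at zero
        a = max(0.0, a + dt * (drive_a - a - inhibition * b))
        b = max(0.0, b + dt * (drive_b - b - inhibition * a))
    return "A" if a > b else "B"

# Even a small difference in drive yields a decisive winner.
print(winner_take_all(1.0, 0.9))   # → A
print(winner_take_all(0.7, 1.0))   # → B
```

The point of the toy is only that "deciding" can emerge from competition among mindless parts; no population is the little guy in charge.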


Jun 22, 2012 -- 5:08PM, Beingofone wrote:

I choose not to think that and instead decide for myself. The above mumbo jumbo is thus refuted.


It went completely over your head, I get it. And by the way, consensus theories in science don't get refuted by some nobody on the internet--because that's all you are.


Jun 22, 2012 -- 5:08PM, Beingofone wrote:

Yup - nobody and nothing decided what to think, folks. It's a conspiracy of so many properties that the cause cannot be found. So saith the biggest and brightest of physicalism.


It went completely over your head, I get it.

Beingofone June 22, 2012 8:15 PM EDT
"The aim of science is not to open the door to infinite wisdom, but to set a limit to infinite error."
Bertolt Brecht


"Errors using inadequate data are much less than those using no data at all."
Charles Babbage
Faustus5 June 24, 2012 9:54 AM EDT

Jun 22, 2012 -- 8:15PM, Beingofone wrote:


"The aim of science is not to open the door to infinite wisdom, but to set a limit to infinite error."
Bertolt Brecht


"Errors using inadequate data are much less than those using no data at all."
Charles Babbage



In other words, because you have no comprehension of any of the ideas discussed in the global neuronal workspace model, you went to the net and found some quotes to save face. FAIL. Try again.


For instance, cite the actual inadequate data that these scientists are using and explain why it is inadequate.


Look, I know you can't, and I know you won't even attempt it. My point is that you don't know one thing about cognitive neuroscience, but somehow you think you know more than people who have studied the subject very hard their entire lives. Unbelievable arrogance, that.

Blü June 24, 2012 7:51 PM EDT

Beingofone


I choose not to think that and instead decide for myself. The above mumbo jumbo is thus refuted.


Describe to us the process - the sequence of events - by which you made your choice.


Why can't you make choices unless you have a brain?


What part did your brain play in that choice?


Which of its physical qualities allowed it to play that part? 


How did those qualities work so that you made your decision?


Beingofone June 25, 2012 6:31 PM EDT

Faust:


For instance, cite the actual inadequate data that these scientists are using and explain why it is inadequate



This is quite the challenge. I cannot seem to find a single bit or scrap of 'data' to refute. Very typical, when dealing with what causes the thinker to think and make choices, of the blind believers in rigid materialism.


If you would point out a single bit of data that tells us where the thinker is, I will be most excited to address the mysterious piece of missing evidence.

Beingofone June 25, 2012 6:38 PM EDT

Jun 24, 2012 -- 7:51PM, Blü wrote:


Describe to us the process - the sequence of events - by which you made your choice.




I pick a topic.


Why can't you make choices unless you have a brain?



Who told your DNA to construct a brain for you to make choices?


What part did your brain play in that choice?



It slowed down the field and stream of energy and data into bits.


Which of its physical qualities allowed it to play that part?



Synaptic gap traps a single bit of data.


How did those qualities work so that you made your decision?



Could you describe with a little more detail or specifics what you are asking here?


I do appreciate the honest questions.

Beingofone June 25, 2012 6:45 PM EDT

Oh BTW Faust, I have a conversation tip for ya. When you have a discussion, you should try to make what is called a point. This can be either a premise or a conclusion, but without either, a conversation - like your entire conversation with me - is referred to as 'pointless' or 'futile'.


Just trying to help you out there - you look like you needed help in how to make a point.

Faustus5 June 26, 2012 7:03 AM EDT

Jun 25, 2012 -- 6:31PM, Beingofone wrote:

Faust:


This is quite the challenge. I cannot seem to find a single bit or scrap of 'data' to refute.


Then why did you post a quote about inadequate data when you don't know shit about anything under discussion?


Jun 25, 2012 -- 6:31PM, Beingofone wrote:

Very typical when dealing with what causes the thinker to think and make choices by the blind believers in rigid materialism.


Very typical when dealing with people who actually know nothing about science but pretend to.


Jun 25, 2012 -- 6:31PM, Beingofone wrote:

If you would point out a single bit of data that tells us where the thinker is, I will be most excited to address the mysterious piece of missing evidence.


Why don't you get a basic education on the subject matter so that if someone gave you the data that cognitive neuroscience uses, you might actually have a chance of understanding what it means? I don't cast pearls before swine.

Faustus5 June 26, 2012 7:05 AM EDT

Jun 25, 2012 -- 6:45PM, Beingofone wrote:


Oh BTW Faust, I have a conversation tip for ya. When you have a discussion, you should try to make what is called a point. This can be either a premise or a conclusion, but without either, a conversation - like your entire conversation with me - is referred to as 'pointless' or 'futile'.


Just trying to help you out there - you look like you needed help in how to make a point.


Oh, like you did with your post number 34?


Just trying to help you out there - you look like you needed help in how to make a point.

Beingofone June 26, 2012 10:16 PM EDT

Jun 26, 2012 -- 7:03AM, Faustus5 wrote:


Jun 25, 2012 -- 6:31PM, Beingofone wrote:

Faust:


This is quite the challenge. I cannot seem to find a single bit or scrap of 'data' to refute.


Then why did you post a quote about inadequate data when you don't know shit about anything under discussion?




Er uh - because it's inadequate. In fact, there is no evidence - none, nada, zero, doughnut, etc.


Jun 25, 2012 -- 6:31PM, Beingofone wrote:

Very typical when dealing with what causes the thinker to think and make choices by the blind believers in rigid materialism.


Very typical when dealing with people who actually know nothing about science but pretend to.


Jun 25, 2012 -- 6:31PM, Beingofone wrote:

If you would point out a single bit of data that tells us where the thinker is, I will be most excited to address the mysterious piece of missing evidence.


Why don't you get a basic education on the subject matter so that if someone gave you the data that cognitive neuroscience uses, you might actually have a chance of understanding what it means? I don't cast pearls before swine.



You cannot provide any evidence and you want me to try to refute non existing data.


Then call names when I cannot find any evidence and you cannot seem to point to any.


Do you realize you are not making a lick of sense at all? Probably not, as is my experience with fundamentalist types.

amcolph June 26, 2012 10:34 PM EDT

Jun 25, 2012 -- 6:31PM, Beingofone wrote:


 


This is quite the challenge. I cannot seem to find a single bit or scrap of 'data' to refute. Very typical, when dealing with what causes the thinker to think and make choices, of the blind believers in rigid materialism.


 




What in the world is "rigid materialism?"  Do you mean "atheist?"  It is not necessary to be an atheist to accept that the mind is a property of the physical brain.

Faustus5 June 27, 2012 7:03 AM EDT

Jun 26, 2012 -- 10:16PM, Beingofone wrote:

Er uh - because it's inadequate. In fact, there is no evidence - none, nada, zero, doughnut, etc.


Spoken like a true know-nothing who hasn't studied the subject in any degree whatsoever.


Jun 25, 2012 -- 6:31PM, Beingofone wrote:

If you would point out a single bit of data that tells us where the thinker is, I will be most excited to address the mysterious piece of missing evidence.


The fact that you would say something like this just reinforces the fact that you fundamentally don't understand how science works. You are acting exactly like a creationist, who blindly thinks that models in science are ever established by "a single bit of data".


That's never the case. Models are established by intensive analysis of tens of thousands of documented experiments and tens of thousands of data points. And you have to have an education in the language of that analysis before you can even evaluate the data itself.


You obviously lack that education, given the way that you reacted to the passages I quoted--it all went right over your head. So it wouldn't matter how many books or papers I suggested you read.  You wouldn't understand a word of it. And yes--you would literally have to read entire books and papers before you even had a chance of qualifying as someone whose opinion mattered.



Jun 26, 2012 -- 10:16PM, Beingofone wrote:

You cannot provide any evidence and you want me to try to refute non existing data.


Put up or shut up.


Go to a library. Get The Cognitive Neuroscience of Consciousness, edited by Stanislas Dehaene. Tell me what mistakes the scientists in that book made. If you aren't willing to do this, then you've just admitted that you are in over your head and always will be.


And we all know that you won't bother, so just admit it now.

Blü June 28, 2012 10:42 AM EDT

Beingofone


blü: Describe to us the process - the sequence of events - by which you made your choice.
Being: I pick a topic.


The central question is: how, in your view, do you make decisions? What does what, such that your decision results?


So taking for example how you pick a topic, what's the sequence of events that results in the decision?


(As for your other answers, human brains are analog, not binary, so they don't have 'bits'.  And synapses don't 'trap bits' - they're more like relay stations, amplifying the signal.)

newchurchguy June 28, 2012 4:51 PM EDT

Jun 9, 2012 -- 12:45PM, stardustpilgrim wrote:


Tesla had a great understanding of electricity and was a great inventor. He was a proponent of AC vs Edison's DC. One day he was walking and a poem came to mind. From the words of the poem, on the spot Tesla invented the alternating current motor. A few seconds previous, the alternating current motor didn't exist. Then it existed only in the mind of Tesla. Then he actually built an alternating current motor.


Information appeared in Tesla's mind, from nothing out of thin air. Then Tesla turned the information into the matter and energy of a working electrical motor.


?


sdp 





SDP,


Great example. Information science can reduce much of the event to its thermodynamic parts. Calories were burned for mental work in the physical brain. What was gained in exchange for the calories was structure - structure in a mental simulation of the physical world.


I would use the term "information object" for what was constructed in the mind of Tesla. And physically, this new information object started changing real-world probability regarding future events from the moment of its existence in Tesla's mind.


 

stardustpilgrim June 28, 2012 11:43 PM EDT

Jun 28, 2012 -- 4:51PM, newchurchguy wrote:


Jun 9, 2012 -- 12:45PM, stardustpilgrim wrote:


Tesla had a great understanding of electricity and was a great inventor. He was a proponent of AC vs Edison's DC. One day he was walking and a poem came to mind. From the words of the poem, on the spot Tesla invented the alternating current motor. A few seconds previous, the alternating current motor didn't exist. Then it existed only in the mind of Tesla. Then he actually built an alternating current motor.


Information appeared in Tesla's mind, from nothing out of thin air. Then Tesla turned the information into the matter and energy of a working electrical motor.


?


sdp 





SDP,


Great example. Information science can reduce much of the event to its thermodynamic parts. Calories were burned for mental work in the physical brain. What was gained in exchange for the calories was structure - structure in a mental simulation of the physical world.


I would use the term "information object" for what was constructed in the mind of Tesla. And physically, this new information object started changing real-world probability regarding future events from the moment of its existence in Tesla's mind.


 




Yea........information seems causal......... :-) ............


sdp

Blü July 5, 2012 2:14 AM EDT

stardust


Information appeared in Tesla's mind, from nothing out of thin air.


Hardly.


The brain has function centers - modules, if you like - for storing, recalling, comparing, considering, talking and reading, body movement, resolving priority conflicts, and so on. The modules communicate with each other, and because this communication is itself a brain function (as distinct from coincidental) we're good at it - we use it for jokes, art, inventions, problem solving and so on. Taking Tesla's account at face value, the ingredients of the discovery were already in his mind when, under the stimulus of the poem, he perceived a connection between individual things he already knew.


If he hadn't already known those things, he couldn't have made the necessary connection.


Flag stardustpilgrim July 6, 2012 7:08 PM EDT

Jul 5, 2012 -- 2:14AM, Blü wrote:


stardust


Information appeared in Tesla's mind, from nothing out of thin air.


Hardly.


The brain has function centers - modules, if you like - for storing, recalling, comparing, considering, talking and reading, body movement, resolving priority conflicts, and so on.  The modules communicate with each other, and because this communication is itself a brain function (as distinct from coincidental) we're good at it - use it for jokes, art, inventions, problem solving and so on.  Taking Tesla's account at face value, the ingredients of the discovery were already in his mind when under the stimulus of the song he perceived a connection between individual things he already knew.


If he hadn't already known those things, he couldn't have made the necessary connection.





But would you call what was going on inside Tesla's head information about how (aspects of) the universe is structured? And was not that information to a very great extent responsible for the world we have today (use of AC instead of Edison's DC, the revolution of the AC motor and all that entails, etc.)?


Does that not show that information is causal?


sdp 

Flag Blü July 6, 2012 9:46 PM EDT

stardust


But would you call what was going on inside Tesla's head information about how (aspects of) the universe is structured?


That's one of those what-do-you-mean-by-information questions, no?


Tesla would have perceived the relationship between concepts he already had.  I imagine it would be a bit like walking the dog when suddenly you realize what you can give your mother for her birthday, and a bit like staring at a chess puzzle and having that AHA! moment - both inspiration and cultivated ways and talents of thinking.


I very strongly doubt that he was working from information=data - rather information=concepts he already had.



Does that not show that information is causal?


Not information=data.  Information=stimulating perceptions and ideas, sure.



Flag Mesothet July 25, 2012 12:09 AM EDT

Though a little hesitant, I’d like to re-jump-start this thread with some opinions and perspectives.


The potential stipulation that neurological data must be viewed as somehow proving the physicalist stance seems . . . well . . . not scientific. Unless one presumes that science in any way is founded upon absolutist convictions of reality.


A maybe more accurate assessment may be that the physicalist paradigm currently most subscribed to in the sciences serves its pragmatic function relatively well in addressing most data. The rest, as I presume all are aware of, will be confirmation biases toward this or that metaphysical outlook.


I have known of neuroscientists that not only were not quite ethical in their authority over others but who also engaged in studies that—for anyone who knows anything about biological evolution—were (and I presume still are) utterly nonsensical. More specifically, the neurological study of bird CNS’s in attempts to better comprehend what has been termed the human “language instinct”. Not only do birds and mammals not share any significant homologous evolution but they also do not hold any significant degree of analogous evolution. Yet, being considered a sexy field, funding was provided for such. Point being, it is quite possible that not all neuroscientists are extremely privy to what in the humanities is termed philosophical reasoning.


Data will be data however.


[Of course, there will be honest neuroscientists galore—but few will partake of asking big questions such as what consciousness truly is: most will work in specialized fields.]


As a relevant analogy to this topic of consciousness, were one to ask “What is true objectivity?” what may anyone honestly answer but either a) “no one knows” or b) immaturely provide a subjectively biased stipulation which, upon analysis, will ultimately be false? Nevertheless, does this in any way then imply that “there does not exist such a thing as true objectivity”?


One may likewise assess the reality of consciousness. It may be made up of “voters” within the mind (David Hume’s commonwealth hypothesis of consciousness has been around for quite some time—though he himself was not fully satisfied with it), but there will yet remain some aspect of self which will perceive/interpret.


This aspect of self will not be limited to only humans or organisms with a CNS/brain.


As with true objectivity, all sapient persons will know of it as being real but none will know what the thing which perceives per se truly is.


As a perspective, using the Hindu concept of Maya (illusion) as premise of informational reality, one could affirm that on an ultimate level of reality only consciousnesses (that which perceives) are real and all else is illusion within which consciousnesses are enveloped and limited by. This, to me, seems to be a very simple—and very ancient—argument. I bring it up for the following two reasons: 1) it can in no way be falsified via data of any kind, and 2) it can hold full empirical integrity. Hence, this alternative perspective to that of physicalism can be fully in tune with such realities as neurological data, theory of biological evolution, etc.


One could say that such a currently “alternative” metaphysical perspective will be—just as physicalism is—fully liable to its own confirmation biases. However, someone such as myself may contend that it will have far more explanatory power.


[I hope I haven’t placed my foot into my mouth with all this; but, if I’m in any way wrong, I’d like to learn how . . . ]

Flag Faustus5 July 25, 2012 6:29 PM EDT

Jul 25, 2012 -- 12:09AM, Mesothet wrote:

The potential stipulation that neurological data must be viewed as somehow proving the physicalist stance seems . . . well . . . not scientific.


It's more like this: all of science proceeds under the reigning assumption that physicalism/materialism is true. So long as we are rewarded with a constant bounty of successful models within this paradigm, that initial assumption is deemed an accurate gauge of the ontology of the universe.


It could always have turned out otherwise. For instance, critics of materialism in past centuries claimed that it would be impossible to synthesize organic matter from non-organic matter, and if that had turned out to be true, materialism would have been falsified.


Jul 25, 2012 -- 12:09AM, Mesothet wrote:

I have known of neuroscientists that not only were not quite ethical in their authority over others but who also engaged in studies that—for anyone who knows anything about biological evolution—were (and I presume still are) utterly nonsensical. More specifically, the neurological study of bird CNS’s in attempts to better comprehend what has been termed the human “language instinct”. Not only do birds and mammals not share any significant homologous evolution but they also do not hold any significant degree of analogous evolution. Yet, being considered a sexy field, funding was provided for such. Point being, it is quite possible that not all neuroscientists are extremely privy to what in the humanities is termed philosophical reasoning.


Actually, studying bird nervous systems for clues about language makes perfect sense, for these reasons:


1. Even though mammals and birds diverged from a common ancestor quite a long time ago, evolution operates on what is already there, and it could be that the origins of bird song in the neural architecture of their brains were built on the same ancient structures that human language evolved from.


2. Even if 1 were not the case--and it seems rather likely that it would be--convergent evolution happens, and it would be valuable to see if we could learn lessons from how bird song is structured in their brains that might be carried over in the brains of humans. We might be encouraged from what we found to look in places within the human brain that had been neglected previously.


3. Recent studies of parrots and other birds with the power to mimic have suggested that they might understand, in a limited way, how to use the words we thought they were just randomly aping. I want to know if that is true and what, in their brains, allows them to do so.


Jul 25, 2012 -- 12:09AM, Mesothet wrote:

[Of course, there will be honest neuroscientists galore—but few will partake of asking big questions such as what consciousness truly is: most will work in specialized fields.]


Well, in the last few decades many of them have become much more bold and have tried to piece together the things the specialists were concentrating on. That's how Baars put together the global workspace model.


Jul 25, 2012 -- 12:09AM, Mesothet wrote:

As a relevant analogy to this topic of consciousness, were one to ask “What is true objectivity?” what may anyone honestly answer but either a) “no one knows” or b) immaturely provide a subjectively biased stipulation which, upon analysis, will ultimately be false? Nevertheless, does this in any way then imply that “there does not exist such a thing as true objectify”?


I think we can leave philosophers to nitpick over such issues with the confidence that if they ever resolve them, there would be scant consequence for the science of mind. Scientists are pretty good already at figuring out when a consensus is justified (i.e., the model or claim in question is "objective") and when one isn't.


Jul 25, 2012 -- 12:09AM, Mesothet wrote:

One may likewise assess the reality of consciousness. It may be made up of “voters” within the mind (David Hume’s commonwealth hypothesis of consciousness has been around for quite some time—though he himself was not fully satisfied with it), but there will yet remain some aspect of self which will perceive/interpret.


And that, too, is done by coalitions of networks of neurons. There is no self in there guiding this. The self is what emerges from these "votes", after the fact.
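To make the "votes" talk a bit more concrete, here is a deliberately crude sketch in code. It is purely a toy illustration of the metaphor (the coalition names and weights are invented for the example, not taken from any neuroscience model); real neural competition is nothing this tidy:

```python
# A toy caricature of the "coalitions of neurons voting" picture.
# Illustrative only: the coalition names and weights below are made up.

def run_commonwealth(options, voters):
    """Tally weighted votes from each coalition and pick the winner."""
    tallies = {opt: 0.0 for opt in options}
    for weigh in voters:
        for opt in options:
            tallies[opt] += weigh(opt)  # each coalition casts its weighted vote
    winner = max(tallies, key=tallies.get)
    # Note there is no separate "self" in here doing the choosing: the
    # "decision" is just whichever option the tally favors, and any report
    # of "I decided" comes after the fact.
    return winner, tallies

# Hypothetical coalitions competing over what to do next:
hungry = lambda opt: 2.0 if opt == "eat" else 0.0
tired  = lambda opt: 1.5 if opt == "sleep" else 0.0
social = lambda opt: 0.5 if opt == "talk" else 0.0

choice, votes = run_commonwealth(["eat", "sleep", "talk"], [hungry, tired, social])
print(choice)  # "eat" (2.0 beats 1.5 and 0.5)
```

The point of the toy is only that a coherent "choice" can fall out of competing subsystems without any homunculus presiding over them.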

Flag farragut July 25, 2012 8:50 PM EDT

" Recent studies of parrots and other birds with the power to mimic has suggested that they might understand, in a limited way, how to use the words we thought they were just randomly aping. I want to know if that is true and what, in their brains, allows them to do so. "


 


I am quite persuaded that the African Grey of the family of the late Charles Fiterman pretty much proved the case.

Flag Mesothet July 26, 2012 12:27 AM EDT

Jul 25, 2012 -- 8:50PM, farragut wrote:


" Recent studies of parrots and other birds with the power to mimic has suggested that they might understand, in a limited way, how to use the words we thought they were just randomly aping. I want to know if that is true and what, in their brains, allows them to do so. "


 


I am quite persuaded that the African Grey of the family of the late Charles Fiterman pretty much proved the case.




Ah . . . the Grey parrot: my favorite in the genus. Ethological research of many wild bird species in their natural habitat seems to typically indicate bird intelligence as well: the Corvus genus is just one example I know a little of that now comes to mind.


:)  Never claimed that all bird-species were relatively unintelligent.

Flag Mesothet July 26, 2012 12:34 AM EDT

Faustus5, 


So that I don’t unnecessarily jump to erroneous presumptions, I’d like to know whether you take consciousness to be strictly limited to those organisms which are endowed with a CNS.


I, at least for now, take consciousness—non-anthropocentrically contemplated—to denote the act of perceiving anything other than that which is doing the perceiving (I take this denotation of consciousness to hold valid irrelevant to metaphysical/ontological stance as to what such “perceiver” might in truth be).


I grant this is a very generalized understanding of consciousness; however, I view the act of perception as impossible without the concordant faculty of interpretation. For example, if an ameba is in any way presumed able to perceive then it will also be required to interpret—to try to keep this simple—external stimuli: at minimum, this interpretation of external stimuli will result in the given organism’s subjective meaning of it as something worthy of either positive or negative valency (i.e., in either attraction toward or avoidance of given perceived stimuli). Many an organism devoid of a CNS will typically be assessed as capable of some form of perception—I myself will contend this of all organisms as well as cells pertaining to multicellular organisms. The assessment that perception requires interpretation of information will, then, result in the assessment that perception will be equivalent to some at least minimal degree of consciousness.


If there are disagreements, please outline the reasons for such. In this case, also please describe which organisms, in your opinion, will have such a thing as consciousness and which will not.


This clarification, I believe, may help in better common understanding.


------------------


It's more like this: all of science proceeds under the reigning assumption that physicalism/materialism is true.


To me this overlooks a significant portion of empirical scientists who, aside from valuing the objectivity of empirical evidence, are also theists of one form or another. I’m not familiar with statistics as to such a populace of theistic (as compared with atheistic) scientists in all branches of empirical science—but I am familiar with the reality that such do at least exist in the cognitive sciences. To my best current understanding, most theists are not physicalists by default—and science seems to instead proceed under the optimal intent of scientists to best uncover that which is ultimately true: regardless of ontological assumptions of reality.


The following seems to be tangential to the topic of consciousness. However, so that I don’t appear overly ignorant of the topic matter:



1. Even though mammals and birds diverged from a common ancestor quite a long time ago, evolution operates on what is already there, and it could be that the origins of bird song in the neural architecture of their brains was built on the same ancient structures that human language evolved from.



Were this a genetics issue of which genes influence which cognitive behaviors, I might agree. For a shortcut explanation: dinosaurs and mammals that coexisted way back when were drastically different in both physical and behavioral phenotype. Birds (which evolved from dinosaurs) and placental mammals --> primates (which evolved from the small mammals that survived the dinosaurs) have gone through drastically different evolution. The likelihood that the bird and human “instinct for grammar” (which great apes do not share) holds a common ancestry seems rather improbable, at best.



2. Even if 1 were not the case--and it seems rather likely that it would be--convergent evolution happens, and it would be valuable to see if we could learn lessons from how bird song is structured in their brains that might be carried over in the brains of humans. We might be encouraged from what we found to look in places within the human brain that had been neglected previously.



By this argument of convergent evolution, one would also be encouraged to spend the valuable resources of both time and funding on studying squid eyesight so as to discover new insights concerning the occipital lobes, etc., in vertebrates--but I fail to see the reason for such research. After some time (only as a measly Research Tech) in a neuroscience lab working on finch brains, I’m fairly certain that--aside from a bilateralization of CNS hemispheres--there is no parallel between avian and mammalian brain structure (e.g. lobes, Wernicke & Broca, etc.).


#3, on the other hand, does not deal with the issue initially addressed.


Again, I’ve replied with my best opinions so as not to seem as though I was merely pulling stuff out of nowhere. But all this may be relatively beside the point of consciousness per se. For now, maybe we can agree to disagree on the benefits/detriments of avian-species research for the specific purpose of understanding Homo sapiens’ capacity for instinctive grammar during the linguistic critical period of development.


Scientists are pretty good already at figuring out when a consensus is justified (i.e., the model or claim in question is "objective") and when one isn't.


Then, between us non-professional scientists, do we at least agree that “objective” signifies “impartial”: which, in a nonhyperbolic sense, implies ever greater reduction of subjectivity (aka. bias)?


And that, too, is done by coalitions of networks of neurons. There is no self in there guiding this. The self is what emerges from these "votes", after the fact.


Aside from “self” being a very relative term to that which may be intended by a multitude of discordant perspectives, it first seems beneficial to better understand your position: is this, in your view, best data-supported opinion or blatant, incontrovertible fact?


At least for now, you seem to have overlooked the alternative model of reality proposed at the end of post #51. This is, after all, the alternative I’d like to uphold as being at least a contending possibility to that of physicalism.

Flag Faustus5 July 28, 2012 11:19 AM EDT

Jul 26, 2012 -- 12:34AM, Mesothet wrote:

So that I don’t unnecessarily jump to erroneous presumptions, I’d like to know whether you take consciousness to be strictly limited to those organisms which are endowed with a CNS.


Yes. Or the functional equivalent, if you don't want to rule out aliens or artificial intelligence. The concept doesn't make much sense otherwise.


Jul 26, 2012 -- 12:34AM, Mesothet wrote:

I, at least for now, take consciousness—non-anthropocentrically contemplated—to denote the act of perceiving anything other than that which is doing the perceiving (I take this denotation of consciousness to hold valid irrelevant to metaphysical/ontological stance as to what such “perceiver” might in truth be).


I think it is best to let perception just be perception and not have so wide a concept of consciousness that it ends up applying to amoebas.


You want the word to pick out something that makes us humans different from other animals. Start with beings you know are conscious--us. Then figure out what makes us conscious. Then see if those things are measurably present in other animals.


Jul 26, 2012 -- 12:34AM, Mesothet wrote:

If there are disagreements, please outline the reasons for such. In this case, also please describe which organisms, in your opinion, will have such a thing as consciousness and which will not.


Right now I am persuaded by people like Dan Dennett that only human beings who can speak and understand a language are conscious. I've found his arguments that human level consciousness is only made possible through the acquisition of language to be mildly convincing. Infants and animals have something important going on, but language is enormously important.


This is in part a consequence of the methodology I recommended earlier--start with beings you know are conscious, see what makes them so, and then only widen the circle when you have measurable criteria.


But--I'm a fence sitter on this one. Something seems wrong about making language the deal-maker, but I can't articulate what seems wrong and I can't rebut Dennett's arguments, though I'd like to.


Jul 26, 2012 -- 12:34AM, Mesothet wrote:

To me this overlooks a significant portion of empirical scientist who, aside from valuing the objectivity of empirical evidence, are also theists of one form or another.


Scientists who are theists are going to take physicalism or materialism as a methodology when they are doing their work. They will believe that in fact materialism in the end isn't true and that there are realms science cannot measure or detect. But they know--if they are good scientists--that when doing science, only natural explanations are allowed.


Jul 26, 2012 -- 12:34AM, Mesothet wrote:

The likelihood that the bird and human “instinct for grammar” (which great apes do not share) holds a common ancestry seems rather improbable, at best.


I disagree. Remember, they both started from a common ancestor and live on the same planet with similar selection pressures. Evolution can't turn just anything into a mechanism for verbal utterances and perceptions. It has to work with mechanisms already there doing something related to verbal utterances and perceptions. So it makes sense to see if there are commonalities between the neural architecture for bird song and the so-called language instinct. Evolutionary logic suggests there should be.


Jul 26, 2012 -- 12:34AM, Mesothet wrote:

By this argument of convergent evolution, one would also be encouraged to spend the valuable resources of both time and funding on studying squid eyesight so as to discover new insights concerning the occipital lobes, etc., in vertebrates--but I fail to see the reason for such research.


Biologists look into this sort of thing because they want to understand how convergent evolution happens.


Jul 26, 2012 -- 12:34AM, Mesothet wrote:

After some time (only as a measly Research Tech) in a neuroscience lab working on finch brains, I’m fairly certain that--aside from a bilateralization of CNS hemispheres--there is no parallel between avian and mammalian brain structure (e.g. lobes, Wernicke & Broca, etc.)


Time will tell. It is still worthwhile figuring out how bird brains implement bird song even if we don't thereby learn something about ourselves.


Jul 26, 2012 -- 12:34AM, Mesothet wrote:

Then, between us non-professional scientists, do we at least agree that “objective” signifies “impartial”: which, in a nonhyperbolic sense, implies ever greater reduction of subjectivity (aka. bias)?


That sounds like a reasonable sound bite to me!


Jul 26, 2012 -- 12:34AM, Mesothet wrote:

Aside from “self” being a very relative term to that which may be intended by a multitude of discordant perspectives, it first seems beneficial to better understand your position: is this to you best data-supported opinion or blatant, incontrovertible fact?


I think that overwhelmingly, the best interpretation of the data is that the self is an abstraction which we spin out of concrete instances of behavior. A center of narrative gravity, if you will.


Jul 26, 2012 -- 12:34AM, Mesothet wrote:

At least for now, you seem to have overlooked the alternative model of reality proposed at the end of post #51. This is, after all, the alternative I’d like to uphold as being at least a contending possibility to that of physicalism.


Models have to be based on concrete, testable reality. The alternative you suggested didn't meet that standard as far as I could tell. That's why you don't actually see it "competing" in the sciences with physicalism.

Flag Mesothet July 28, 2012 3:04 PM EDT

So as to first take this out of the way: As for me, I will agree to disagree on our differences of reasoning concerning biological evolution & neuroscience research on this thread. I yet disagree on many a level. There’s one book I recall reading (a long time ago), for example, which more or less stipulated that Homo sapiens typically stay together in relationships for only approx. 4 years because we shared convergent evolution with birds. (There are more confounding variables—and premature conclusions—to this than can be mentioned, IMO). I suppose this general topic in itself would make for a good conversation; I, however, am quite interested to continue the topic of consciousness.


Yes. Or the functional equivalent if you don't want to rule out aliens or artificial intelligence. The concept doesn't make much sense otherwise


Myself, I’m more than doubtful that strong AI can ever be accomplished: At core issue to me is the apparent impossibility that trust (for oneself, for other(s), for context, for that which is (e.g., ontologically) real, etc.) can ever be programmed. I.e, the arriving at and/or maintenance of cognitive certainty within a realm of objective uncertainty (cf. academic skepticism) seems a) intrinsic to the reality of any consciousness [to me, even when contemplated at the level of an ameba*] and b) arguably indicative of some degree of ontologically valid freewill (between both intrinsic and extrinsic alternatives) which, if true, can only be best approximated via goal-driven chaos algorithms—but never fully actualized. Even from this vantage, however, this is not to in any way deny the value of cybernetics.


* Obviously, amebas are to be assessed as endowed with far more a priori behaviors than a species such as humankind. Yet, even here, some more recent research has provided some evidence that amebas might be able to engage in some degree of learning [e.g. pre.aps.org/abstract/PRE/v80/i2/e021926]: if so, this would require some degree of what can only be termed forethought: to learn one needs to somehow choose between perceived alternatives in a manner that takes into account (probabilities of) future benefit to self. Needless to say, this would be—at least via current paradigms—unintelligible from a typical physicalist stance; for the mechanisms that would allow any degree of learning in a unicellular organism (I would add, perception of extrinsic stimuli as well) are anything but currently apparent.


You want the word to pick out something that makes us humans different from other animals.


I can well appreciate the importance of dealing with human consciousness above all others. Nevertheless, from where I stand, if biological evolution is to be taken into account—and, I fully believe that it should—there will then either be a cline of progression in degree and/or quality of consciousness leading up to humans or, alternatively, one will presume that a quite miraculous quantum leap has occurred somewhere in the evolutionary history of sentience—an unaccountable transformation from automata to (relatively speaking) autonomous organisms (e.g. the human species).


This latter view, to me, has always seemed to be only a more deeply buried form of Cartesian Dualism . . . all the more odd because it assumes the monistic label of physicalism.


But--I'm a fence sitter on this one. Something seems wrong about making language the deal-maker, but I can't articulate what seems wrong and I can't rebut Dennett's arguments, though I'd like to.


Familiar with this argument (though I have not read most of Dennett’s publications and arguments—but have browsed synopses online), I currently can only try to present the issue in the form of a choice of belief:


I currently presume that this cannot be empirically demonstrated outside of common agreement on introspective analysis of subjective reality as it applies to humans. Personally, I at times hold meaningful ideas (etc.) which I find relatively impossible to address via language—hence, ineffable known concepts. To me, artistic expression will be one means of making such otherwise ineffable subjective realities conveyable to myself and others: e.g. poetry, painting, music, etc. Given this personal reality, I then ask myself whether such (what I’ll term in semi-agreement with Kantian philosophy) noumena of thought [e.g. knowing one knows something that is on “the tip of one’s tongue” yet in no way knowing what it may be in any phenomenal sense—hence something grasped that yet does not hold any phenomenal reality to one’s self] will be fully dependent upon phenomena (e.g. language) or, otherwise, whether language is merely like pre-structured boxes utilized to stuff such noumena of thought into so as to best attempt to convey info to oneself or another? The latter view, if upheld, will discredit the “consciousness is fully dependent upon language” perspective. Also noteworthy, many a lesser animal will convey quite meaningful information via non-verbal communication; hence begging the question of what “language” itself might then be construed to signify. For example, bees will often adequately convey direction and distance of pollen to other bees via body language. Will this then be construed as a form of language between bees? Though not the easiest topic to quickly resolve, I myself do uphold that language is only secondary to consciousness.


Scientists who are theists are going to take physicalism or materialism as a methodology when they are doing their work.


This may be right. If so, then such theists will primarily hold a dualistic ontology. I myself have known of Hindus working in neuroscience (there are many science-driven folks in India as well); in that case, they would likely have held a monistic ontology as previously articulated (what in the West has traditionally been termed philosophical idealism). There is nothing within such a worldview that would deny objective reality, however.


I think that overwhelmingly, the best interpretation of the data is that the self is an abstraction which we spin out of concrete instances of behavior. A center of narrative gravity, if you will.


:) Sounds quite Buddha-like. Though he did state that (in his opinion) the truth is neither that we have a self/soul nor that we don’t have a self/soul. The middle path, though most challenging, here being considered also most valid.


Maybe we could further the commonwealth theory of consciousness—that of at least human consciousness being comprised of votes. I do agree with such a view of human consciousness, but find it to be missing a rather important component: that which a) perceives the votes via introspection and b) chooses amongst the votes, maybe most typically in times of relative uncertainty as to the benefit and opportunity cost of going this way or that.


So as to stay consistent with the metaphor, this missing variable I here address I do presume to itself be resultant of the mind’s voters: this as a unification of otherwise disparate voters, to which individual voters could be affixed and from which individual voters could dislodge.  Nevertheless, there will seem to always remain a unified “perceiver” which will also seem endowed with ability of choice—here, between intrinsic voters perceived by such perceiver so as to best act/react toward any given stimuli. (Such interpretation of human consciousness, to me, does seem to be in line with psychoneurological studies of the human brain’s operations.)


[Needless to say, all this is just best current opinions]

Flag JCarlin July 28, 2012 4:04 PM EDT

Jul 28, 2012 -- 11:19AM, Faustus5 wrote:

Right now I am persuaded by people like Dan Dennett that only human beings who can speak and understand a language are conscious. I've found his arguments that human level consciousness is only made possible through the acquisition of language to be mildly convincing. Infants and animals have something important going on, but language is enormously important.


This is in part a consequence of the methodology I recommended earlier--start with beings you know are conscious, see what makes them so, and then only widen the circle when you have measurable criteria.


But--I'm a fence sitter on this one. Something seems wrong about making language the deal-maker, but I can't articulate what seems wrong and I can't rebut Dennett's arguments, though I'd like to.


I would suggest that the functional equivalent of language—that is, the ability to meaningfully convey abstract concepts—would expand consciousness to those animals that can meaningfully communicate these concepts to each other.  Canine body language is clearly understood by those who care to make the effort, and can convey abstract concepts like "Let's play" or intricate status levels. 


Language is certainly the deal-maker that permits symbolic abstractions, but I am not sure consciousness in a social sense can be restricted to humans. 

Flag Faustus5 July 29, 2012 7:39 AM EDT

Jul 28, 2012 -- 3:04PM, Mesothet wrote:

Myself, I’m more than doubtful that strong AI can ever be accomplished: At core issue to me is the apparent impossibility that trust (for oneself, for other(s), for context, for that which is (e.g., ontologically) real, etc.) can ever be programmed.


I doubt that anyone in the field of AI is even remotely engaged by such issues.


Jul 28, 2012 -- 3:04PM, Mesothet wrote:

I.e., the arriving at and/or maintenance of cognitive certainty within a realm of objective uncertainty (cf. academic skepticism) seems a) intrinsic to the reality of any consciousness [to me, even when contemplated at the level of an ameba*] and b) arguably indicative of some degree of ontologically valid freewill (between both intrinsic and extrinsic alternatives) which, if true, can only be best approximated via goal-driven chaos algorithms—but never fully actualized. Even from this vantage, however, this is not to in any way deny the value of cybernetics.


I can't make heads or tails of this paragraph. You know you are in trouble when a sentence implies academic skepticism on the part of a single cell organism.


Jul 28, 2012 -- 3:04PM, Mesothet wrote:

Yet, even here, some more recent research has provided some evidence that amebas might be able to engage in some degree of learning [e.g. pre.aps.org/abstract/PRE/v80/i2/e021926]: if so, this would require some degree of what can only be termed forethought: to learn one needs to somehow choose between perceived alternatives in a manner that takes into account (probabilities of) future benefit to self. Needless to say, this would be—at least via current paradigms—unintelligible from a typical physicalist stance; for the mechanisms that would allow any degree of learning in a unicellular organism (I would add, perception of extrinsic stimuli as well) are anything but currently apparent.


Hardly. The abstract you cited provides an utterly mindless, mechanistic model for how the amoeba accomplishes this task. The authors don't even hint that this learning behavior is a problem for current paradigms. Didn't you notice?


Jul 28, 2012 -- 3:04PM, Mesothet wrote:

Nevertheless, from where I stand, if biological evolution is to be taken into account—and, I fully believe that it should—there will then either be a cline of progression in degree and/or quality of consciousness leading up to humans. . .


Of course. But to be strictly rigorous and methodologically scientific, you have to start with humans and then work back, only attributing aspects of consciousness when you can reliably verify that they are present.


Jul 28, 2012 -- 3:04PM, Mesothet wrote:

. . .or, alternatively, one will presume that a quite miraculous quantum leap has occurred somewhere in the evolutionary history of sentience—an unaccountable transformation from automata to (relatively speaking) autonomous organisms (e.g. the human species).


The emergence of language could be just such a quantum leap, and right now we do not have a consensus model about how language evolved.


Jul 28, 2012 -- 3:04PM, Mesothet wrote:

I do agree with such a view of human consciousness, but find it to be missing a rather important component: that which a) perceives the votes via introspection and b) chooses amongst the votes, maybe most typically in times or relative uncertainty as to benefit and opportunity cost of going this way or that.


I don't think you fully appreciate the point of the model.


Your perception of the "votes" is nothing more than the outcome of the "voting" process. You perceive something when the coalition of neurons representing a perceptual interpretation gains dominance in the workspace of the brain. We can actually see this happening when scanning the brain during experiments.


It isn't that the coalition of neurons representing X presents their victory to a self for entry into conscious experience. Their "victory" just IS your perception.


Similarly, there is no self that chooses which coalition "wins". When a neural population representing a decision gains dominance in the workspace, that just IS your decision. The idea that there is a self inside the brain guiding this process is an illusion.
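The flavor of the model can be sketched in a few lines of toy code (everything here, the coalition names, biases, and threshold, is made up purely for illustration; it is a caricature of workspace dominance, not anyone's actual model):

```python
import random

# Toy "global workspace": rival coalitions of units accumulate noisy support,
# and whichever coalition first dominates simply IS the "percept" or "decision".
# No extra self inspects the result; dominance is the outcome.

def ignite(coalitions, steps=1000, threshold=50.0, seed=0):
    rng = random.Random(seed)
    support = {name: 0.0 for name in coalitions}
    for _ in range(steps):
        for name, bias in coalitions.items():
            # each coalition gains noisy support proportional to its input bias
            support[name] += bias * rng.random()
        leader = max(support, key=support.get)
        if support[leader] >= threshold:
            return leader  # dominance in the workspace = the "perception"
    return None  # nothing ever ignited

# two rival perceptual interpretations; "face" receives the stronger input
print(ignite({"face": 1.2, "vase": 0.9}))
```

Note there is no line of code where a "perceiver" receives the winner; returning the dominant coalition is the whole story, which is exactly the point of the model.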


Jul 28, 2012 -- 3:04PM, Mesothet wrote:

Nevertheless, there will seem to always remain a unified “perceiver” which will also seem endowed with ability of choice—here, between intrinsic voters perceived by such perceiver as to best act/react toward any given stimuli.


That's an illusion. There is no man or woman behind the curtain, so to speak. Just a bunch of mechanisms.

Flag Faustus5 July 29, 2012 8:02 AM EDT

Jul 28, 2012 -- 4:04PM, JCarlin wrote:

I would suggest that the functional equivalent of language, that is, the ability to meaningfully convey abstract concepts, would expand consciousness to those animals that can meaningfully communicate these concepts to each other.  Canine body language is clearly understood by those who care to make the effort, and can convey abstract concepts like "Let's play" or intricate status levels. 


Language is certainly the deal-maker that permits symbolic abstractions, but I am not sure consciousness in a social sense can be restricted to humans.


Yeah, in the end it is a matter of definition. The point that Dennett seems to be making is that language in humans is such a quantum leap above the representational capabilities of other animals that we should restrict the concept of consciousness to only those beings that have what we have.


Yesterday a study discussed on public radio's excellent "Radiolab" program reinforced this point for me, and now I'm really not sitting on the fence anymore: I'm more convinced than ever that the ability of language to create abstractions is even more powerful and essential to consciousness than I had thought previously.


You would think (or I would have before yesterday) that the abstractions that language would create for us would be around things like "tomorrow", "borrow", "promise" and other like concepts. But colors? No. Intuitively, one would think that one sees all the colors that exist given your optical capabilities and that's that.


Well, it ain't so simple.


In this study, members of a technologically primitive tribe who did not have a word for the color "blue" were shown a dozen color squares. They were asked to point to the color that was different than the others. The matrix of color squares consisted of various shades of green and one blue square. Amazingly, they were not able to differentiate blue from green. To them, all the squares were the same color.


Now, no one is suggesting that at some raw, purely neurological level, their brains were not capable of registering the fact that the wavelengths from the blue square were different from the green squares.


But apparently, downstream from that purely raw level, where you eventually get to conscious experience, language shapes your experience in ways that are even more fundamental than I had ever imagined. (Trivia: in every language studied so far, blue is always the last color word created. You don't find it in Homer's Greek, for instance.)


I'm reminded of the thesis of that book I mentioned to you last year, The Origin of Consciousness in the Breakdown of the Bicameral Mind. The author argues, on the basis of the analysis of the language in ancient texts, that humans in ancient civilizations weren't fully conscious in the modern sense, even when they had both language and writing. He notes a contrast between the Iliad and the Odyssey: in the former, decisions come from the gods and no one seems to have private thoughts, while in the latter, characters make up their own minds and are capable of deceiving others.


That's going too far, since we know some animals are capable of deceiving others and therefore have a "theory of mind", but it is intriguing to wonder how much the introduction of new concepts makes us radically different than our ancestors or civilizations that don't have the same vocabulary.

Flag Mesothet July 29, 2012 10:53 AM EDT

We indeed seem to be talking past each other.


There is no man or woman behind the curtain, so to speak. Just a bunch of mechanisms.


Condensed though it was, my argument makes no mention of a homunculus. It seems as though ready-held conclusions are clouding the capacity for impartiality.


Taking things way back . . .


-- Do you presume that humans are automata?


[from wiktionary: A machine or robot designed to follow a precise sequence of instructions.]

Flag Faustus5 July 29, 2012 11:05 AM EDT

Jul 29, 2012 -- 10:53AM, Mesothet wrote:


We indeed seem to be talking past each other.


There is no man or woman behind the curtain, so to speak. Just a bunch of mechanisms.


Condensed though it was, my argument makes no mention of a homunculus.


It doesn't mention one as such, but it doesn't need to, as in several passages a "perceiver" or "chooser" is posited who is somehow, mysteriously, independent of the "voting" coalitions of neurons. Even if you reject dualism, it is hard to get rid of the metaphors and habits of thought that dualism endowed us with.


Jul 29, 2012 -- 10:53AM, Mesothet wrote:

-- Do you presume that humans are automata?


[from wiktionary: A machine or robot designed to follow a precise sequence of instructions.]



Humans are made out of automata.

Flag Mesothet July 29, 2012 11:11 AM EDT

Humans are made out of automata.


 This answer fully deviates from the question. Again:


 -- Do you presume that the human per se “IS” an automaton? 

Flag Faustus5 July 29, 2012 11:23 AM EDT

Jul 29, 2012 -- 11:11AM, Mesothet wrote:


Humans are made out of automata.


 This answer fully deviates from the question. Again:


 -- Do you presume that the human per se “IS” an automaton? 



I think the word "automata" is best used to contrast us with simpler agents. In some very technical sense, if we are made of automata, then everything we do is the output of their mindless algorithms, and you could if you wanted say that this means we are automata as well. But I don't think that is very useful. I'm very different from a thermostat or the AI that controls the behavior of a video game character not under control of a human. So I would prefer to use "automata" to describe them in contrast to me.
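For what it's worth, the thermostat case shows just how little an automaton needs: one number, one fixed rule. A minimal sketch (the setpoint and hysteresis values are arbitrary, chosen only to illustrate the switching mechanism):

```python
# A thermostat really is a tiny automaton: a fixed rule mapping its one
# state variable (temperature) to an action, representing nothing else.

def thermostat(temp, setpoint=20.0, hysteresis=0.5, heating=False):
    """Return whether the heater should be on for the current reading."""
    if temp < setpoint - hysteresis:
        return True          # too cold: switch heater on
    if temp > setpoint + hysteresis:
        return False         # too warm: switch heater off
    return heating           # inside the dead band: keep current state

print(thermostat(18.0))                  # True
print(thermostat(22.0))                  # False
print(thermostat(20.2, heating=True))    # True (hysteresis keeps it on)
```

The hysteresis band is the only "memory" the device has, which is why contrasting it with a language-using agent seems the natural use of the word.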

Flag Mesothet July 29, 2012 11:44 AM EDT

I think the word "automata" is best used to contrast us with simpler agents. In some very technical sense, if we are made of automata, then everything we do is the output of their mindless algorithms, and you could if you wanted say that this means we are automata as well. But I don't think that is very useful. I'm very different from a thermostat or the AI that controls the behavior of a video game character not under control of a human. So I would prefer to use "automata" to describe them in contrast to me.


 I’m trying to presume the best and not interpret all this as a big smoke-screen to cloud the pivotal issue with needless intellectualized complications.


So, presuming we both hold attraction toward impartiality/objectivity:


If, when technically approached, a human indeed “IS” an automaton, how then do you approach the very issue of consciousness in an impartial manner?


Absurd though this technical question might seem: For example, why would a relatively complex automaton such as a human be deemed to hold consciousness and a relatively simple automaton such as a thermostat not be? [Let’s not do the “because one has a brain” argument: remember, for one example, that to you strong AI is possible and such would not have the organic component of a brain.]


Please be a little careful, though . . . technically speaking, this is a slippery slope into ubiquitous animism wherein all insentient objects will be technically deemed as potentially “conscious”. [I’ll keep epistemological issues with the Turing machine argument out of it for now.]


[Also . . . please keep in mind that in real life applications there will most often also be found a dichotomy between something deemed "alive" and something deemed "non-living".]

Flag Faustus5 July 29, 2012 11:52 AM EDT

Jul 29, 2012 -- 11:44AM, Mesothet wrote:

I’m trying to presume the best and not interpret all this as a big smoke-screen to cloud the pivotal issue with needless intellectualized complications.


It's called "Trying to be Accommodating".


Jul 29, 2012 -- 10:53AM, Mesothet wrote:

If, when technically approached, a human indeed “IS” an automaton, how then do you approach the very issue of consciousness in an impartial manner?


I don't see how the issues are related or make any kind of difference.


Jul 29, 2012 -- 10:53AM, Mesothet wrote:

Absurd thought this technical question might seem: For example, why would a relatively complex automaton such as a human be deemed to hold consciousness and a relatively simple automaton such as a thermostat not be?


Because I'm starting with human beings as the best example of beings we know are conscious. A thermostat lacks so many of the processes in us which give rise to consciousness that I don't see how the concept could sensibly apply to one.


Remember, the methodology goes like this: start with the only beings we know for sure are conscious (humans), find out what is going on in them that makes them conscious, and then only grant that a different being is conscious if one can find a suitably similar process going on in them. The thermostat fails that test.

Flag Mesothet July 29, 2012 12:07 PM EDT

 


It appears that, here, less is more:


------------


To keep this as simple as possible: According to wiktionary:


To be conscious “IS” to be aware.


Consciousness “IS” the state of being aware.


[Or do you hold a definition of consciousness that deviates from such rather ubiquitous common sense notion? ]


--------------


Taking things one step at a time,


-- How can *any* automaton be conscious? 

Flag Faustus5 July 29, 2012 1:08 PM EDT

Jul 29, 2012 -- 12:07PM, Mesothet wrote:

Or do you hold a definition of consciousness that deviates from such rather ubiquitous common sense notion? ]


No, consciousness as awareness is fine for now.


Jul 29, 2012 -- 12:07PM, Mesothet wrote:

Taking things one step at a time,


-- How can *any* automaton be conscious? 



As I suggested earlier, I think the word "automaton" is best used so that nothing conscious would count as an automaton.

Flag stardustpilgrim July 29, 2012 1:19 PM EDT

Jul 29, 2012 -- 8:02AM, Faustus5 wrote:


Jul 28, 2012 -- 4:04PM, JCarlin wrote:

I would suggest that the functional equivalent of language, that is, the ability to meaningfully convey abstract concepts, would expand consciousness to those animals that can meaningfully communicate these concepts to each other.  Canine body language is clearly understood by those who care to make the effort, and can convey abstract concepts like "Let's play" or intricate status levels. 


Language is certainly the deal-maker that permits symbolic abstractions, but I am not sure consciousness in a social sense can be restricted to humans.


Yeah, in the end it is a matter of definition. The point that Dennett seems to be making is that language in humans is such a quantum leap above the representational capabilities of other animals that we should restrict the concept of consciousness to only those beings that have what we have.


Yesterday a study discussed on public radio's excellent "Radiolab" program reinforced this point for me, and now I'm really not sitting on the fence anymore: I'm more convinced than ever that the ability of language to create abstractions is even more powerful and essential to consciousness than I had thought previously.


You would think (or I would have before yesterday) that the abstractions that language would create for us would be around things like "tomorrow", "borrow", "promise" and other like concepts. But colors? No. Intuitively, one would think that one sees all the colors that exist given your optical capabilities and that's that.


Well, it ain't so simple.


In this study, members of a technologically primitive tribe who did not have a word for the color "blue" were shown a dozen color squares. They were asked to point to the color that was different than the others. The matrix of color squares consisted of various shades of green and one blue square. Amazingly, they were not able to differentiate blue from green. To them, all the squares were the same color.


Now, no one is suggesting that at some raw, purely neurological level, their brains were not capable of registering the fact that the wavelengths from the blue square were different from the green squares.


But apparently, downstream from that purely raw level, where you eventually get to conscious experience, language shapes your experience in ways that are even more fundamental than I had ever imagined. (Trivia: in every language studied so far, blue is always the last color word created. You don't find it in Homer's Greek, for instance.)


I'm reminded of the thesis of that book I mentioned to you last year, The Origin of Consciousness in the Breakdown of the Bicameral Mind. The author argues, on the basis of the analysis of the language in ancient texts, that humans in ancient civilizations weren't fully conscious in the modern sense, even when they had both language and writing. He notes a contrast between the Iliad and the Odyssey: in the former, decisions come from the gods and no one seems to have private thoughts, while in the latter, characters make up their own minds and are capable of deceiving others.


That's going too far, since we know some animals are capable of deceiving others and therefore have a "theory of mind", but it is intriguing to wonder how much the introduction of new concepts makes us radically different than our ancestors or civilizations that don't have the same vocabulary.




I read somewhere recently (within the last week, I'll try to backtrack where) about the difference between green and blue. Some colors have a genetic basis and some (later-differentiated colors, your blue) have their basis in learning or abstraction. ......I seem to recall the difference had something to do with the light spectrum (red on one end and blue on the other end). 


............


I think we have to make a distinction between consciousness and self-consciousness.


Animals are conscious, but they essentially live timelessly; instinct is more important than memory, and therefore they are not self-conscious.


Humans, OTOH, live through a self, the primary constituent being memory through abstraction. Self as memory creates the movement of (psychological) time, and memory causes the self to be perpetuated. IOW, self as memory creates self-consciousness.


The paradox here is that although self-as-memory creates self-consciousness, the more one lives through memory, the more 'mechanical' one lives, IOW, the less conscious. And alternatively, the more one lives in the present moment (absent psychological self-as-memory, through abstraction) the more self-conscious one actually is (IOW, the more conscious one is).


All this can be discovered, for oneself, by a little experimentation (not introspection).


The key is to live on the razor's edge between self-consciousness-as-memory/abstraction, and learning, which is movement from the known into the unknown. If we do not live on this razor's edge, we become predictable, stuffy, mechanical, and thus (will become) tired of our own selves.


sdp       

Flag Mesothet July 29, 2012 2:28 PM EDT

Faustus5,


Cheers for the quick reply.


As some background to—what may be—alternative perspectives to that of physicalism (not that I claim full/absolute support for any):


en.wikipedia.org/wiki/New_mysterianism: “New Mysterianism”, the position that the so-called “hard problem of consciousness” cannot be resolved by humans. Yes, I’m aware that Dennett wholeheartedly disagrees with this perspective; then again, I presume none of us view him (or any other human) as a demigod.


en.wikipedia.org/wiki/Autopoiesis: “Autopoiesis”, a position that attempts to distinguish the properties of “life” from “non-life”.


en.wikipedia.org/wiki/Biocentrism_(cosmology): “Biocentrism”, a position that life and biology are central to reality, being, and the cosmos. Note: Dennett’s principal critique of this position is only that it fails (in a rather extreme way) to address what consciousness, then, might be.


These are only some examples, provided to address the point that not all alternatives to physicalism pivot around the rather religiously derived (and, to me, utterly nonsensical) western notion of Cartesian Dualism. Also, though none of these perspectives is currently anywhere near to being commonly accepted in the empirical sciences (they all in their own way challenge the current confirmation bias of physicalism [or, at times, substance dualism]), they are non-fringe enough to be upheld on wikipedia; i.e., some degree of within-group consensus on most such positions/perspectives does exist.


It doesn't mention one as such, but it doesn't need to, as in several passages a "perceiver" or "chooser" is posited who is somehow, mysteriously, independent of the "voting" coalitions of neurons. Even if you reject dualism, it is hard to get rid of the metaphors and habits of thought that dualism endowed us with.


So as to clarify, “voting” coalitions of neurons (each individual neuron itself being a living thing acquiring sustenance and holding valency toward stimuli within a “community” of like-neurons) produce non-physical-mass-endowed “forces” (for current lack of a better term) which ultimately result in a fully subjective, first-person awareness, hence consciousness, of “stuff”.


Now—as uncommon as this perspective may be—what is there to mandatorily necessitate that these “voting” coalitions of neurons will in and of themselves be automata?


An individual neuron will be able to survive on its own in a Petri dish given sufficient nourishment—thereby indicating that individual neurons are in no way reliant upon the web of neurons to be found in any CNS. This empirical data, then, can well lend credence to the perspective that—in some ways maybe like ameba—individual neurons (which, as we currently know, can well search for new stimuli in the form of functional synaptic connections via at least dendritic growth) can well be construed to be self-regulating living beings.  [OK, we hold different confirmation biases—but, other than ready-made physicalism-based conclusions, do you know of any empirical evidence to demonstrate that an individual neuron is not a “living being”? Hence, something living that is then endowed with capacity to “perceive”: e.g. a “stimulating” synaptic connection.]


Also, there are too many examples to fully list of what will be relative to the conscious-self “unconscious” awareness of environmental stimuli. This, then, will readily imply that the unconscious too will be quite able to “perceive”.


----------------


Before furthering the possibility of such perspective, I’d also like to come to another common consensus as to the following:


Since it currently appears to me by the previous posts that we agree to consciousness = awareness, do we disagree that awareness could only occur via some form of perception?


[Here, not using the typical physicalism model of perception wherein perception will mandatorily be that which is gathered strictly via the known physical senses. For instance, introspection (e.g. memory awareness) will not conform to such currently common understanding of perception. Likewise, such physicalist model of perception will ultimately be found to retroactively denounce the possibility that any unicellular organism will in any way be capable of any form of perception, aka (to me) awareness of context.]


To then simplify the question, if awareness can be possible devoid of any perception, please offer an explanation of what perception-devoid awareness would signify—be it either theoretical or from personal experience, etc.

Flag Faustus5 July 29, 2012 3:15 PM EDT

Jul 29, 2012 -- 2:28PM, Mesothet wrote:

en.wikipedia.org/wiki/New_mysterianism: “New Mysterianism”, the position that the so-called “hard problem of consciousness” cannot be resolved by humans.


"New Mysterianism" is, to me, nothing more than giving up.


Jul 29, 2012 -- 2:28PM, Mesothet wrote:

So as to clarify, “voting” coalitions of neurons (each individual neuron itself being a living thing acquiring sustenance and holding valency toward stimuli within a “community” of like-neurons) produce non-physical-mass-endowed “forces” (for current lack of a better term) which ultimately result in a fully subjective, first-person awareness, hence consciousness, of “stuff”.


Um, no. There is nothing non-physical involved. There are no "forces".


In ways that are still incompletely understood, when neuronal networks are active, the pattern and timing of their firing acts as a code which represents something or processes a representation. I don't want to push the computer metaphor too far because brains aren't very much like computers, but there are no more "non-physical" "forces" in the brain doing your thinking than there are inside the CPU of a computer.  The large scale effects are all the result of tiny mechanical processes, whether neurons firing or bits changing charge.


Jul 29, 2012 -- 2:28PM, Mesothet wrote:

Now—as uncommon as this perspective may be—what is there to mandatorily necessitate that these “voting” coalitions of neurons will in and of themselves be automata?


Because even if the way the brain codes is incompletely understood, the chemistry of what causes an individual neuron to fire is very well understood, and while the biochemistry is more complicated than the feedback and switching mechanisms of a thermostat, it is just as automatic and mindless.
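The standard textbook caricature of that mindlessness is the leaky integrate-and-fire model: voltage leaks away, input accumulates, and a spike fires whenever a fixed threshold is crossed. A toy sketch (the parameters are arbitrary; this is a cartoon of the dynamics, not real biochemistry):

```python
# Leaky integrate-and-fire neuron: a purely mechanical firing rule.

def lif(inputs, tau=10.0, threshold=1.0, dt=1.0):
    """Integrate input current; return the time steps at which spikes occur."""
    v, spikes = 0.0, []
    for t, current in enumerate(inputs):
        v += dt * (-v / tau + current)   # leak toward rest, plus injected current
        if v >= threshold:
            spikes.append(t)
            v = 0.0                      # reset after the spike
    return spikes

# constant drive: the cell fires at perfectly regular, mindless intervals
print(lif([0.15] * 40))  # → [10, 21, 32]
```

Nothing in the rule consults a goal or a percept; given the same drive it ticks like clockwork, which is the sense in which the single cell is "just as automatic" as the thermostat.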


Jul 29, 2012 -- 2:28PM, Mesothet wrote:

An individual neuron will be able to survive on its own in a Petri dish given sufficient nourishment—thereby indicating that individual neurons are in no way reliant upon the web of neurons to be found in any CNS.


Well, in the brain if a neuron doesn't reliably get signals from other neurons it will die. That's kind of important.


Jul 29, 2012 -- 2:28PM, Mesothet wrote:

This empirical data, then, can well lend credence to the perspective that—in some ways maybe like ameba—individual neurons (which, as we currently know, can well search for new stimuli in the form of functional synaptic connections via at least dendritic growth. . .


I've never read anything about individual neurons "searching" for new stimuli. Maybe a different choice of words would be appropriate for whatever it is you are talking about that "we currently know".


Jul 29, 2012 -- 2:28PM, Mesothet wrote:

[OK, we hold different confirmation biases—but, other than ready-made physicalism-based conclusions, do you know of any empirical evidence to demonstrate that an individual neuron is not a “living being”? Hence, something living that is then endowed with capacity to “perceive”: e.g. a “stimulating” synaptic connection.]


Neurons are alive, but I think you are pushing language to the bounds of sensibility to call them "beings" or to say they "perceive". Such attributions are true only in a very exaggerated, metaphorical sense.


Jul 29, 2012 -- 2:28PM, Mesothet wrote:

Also, there are too many examples to fully list of what will be relative to the conscious-self “unconscious” awareness of environmental stimuli. This, then, will readily imply that the unconscious too will be quite able to “perceive”.


Yes, there just happens to be a book right by my computer screen that has a paper in it about perception happening without awareness.


Jul 29, 2012 -- 2:28PM, Mesothet wrote:

Since it currently appears to me by the previous posts that we agree to consciousness = awareness, do we disagree that awareness could only occur via some form of perception?


So long as we allow memory recall to be a form of perception, I suppose yes.


Jul 29, 2012 -- 2:28PM, Mesothet wrote:

Likewise, such physicalist model of perception will ultimately be found to retroactively denounce the possibility that any unicellular organism will in any way be capable of any form of perception, aka (to me) awareness of context.


Because attributing perception to a single cell or even a very, very tiny organism seems to be metaphorically true only. The information I am gathering by looking at a flower is many magnitudes more informationally complex than what a single-celled organism is taking in when internal mechanisms react to gradient flow, for instance. Let's only call what I do perception and use other terms for what a single cell animal does. There's a huge difference and that difference should be recognized, not papered over.

Flag Mesothet July 29, 2012 3:35 PM EDT

Yes, there just happens to be a book right by my computer screen that has a paper in it about perception happening without awareness.


The pivotally addressed question was quite the opposite: Can "awareness" happen without "perception"?


Nevertheless, I am quite interested to understand by what technical definition "perception" can happen without "awareness". 


To save the trouble of yet another post: How then—as one example—does a rock which is broken into two by a significant hit from a hammer not “perceive” the information received from the hammer? I.e., by such a definition of perception, does everything in existence (sentient and insentient) then perceive? 


And, again, please reply to the following (cut and paste from previous post):


"To then simplify the question, if awareness can be possible devoid of any perception, please offer an explanation of what perception-devoid awareness would signify—be it either theoretical or from personal experience, etc."

Flag Faustus5 July 29, 2012 3:51 PM EDT

Jul 29, 2012 -- 3:35PM, Mesothet wrote:

The pivotally addressed question was quite the opposite: Can "awareness" happen without "perception"?


Yeah, but not the passage I responded to. I was agreeing with you.


Jul 29, 2012 -- 3:35PM, Mesothet wrote:

Nevertheless, I am quite interested to understand by what technical definition "perception" can happen without "awareness".


The authors were discussing experiments in which the subject had no conscious awareness of a stimulus they had been given, but whose brains obviously had processed the stimulus to a significant degree. I can go into more detail if you want, but that's the upshot.


Jul 29, 2012 -- 3:35PM, Mesothet wrote:

To save the trouble of yet another post: How then—as one example—does a rock which is broken into two by a significant hit from a hammer not “perceive” the information received from the hammer? I.e., by such a definition of perception, does everything in existence (sentient and insentient) then perceive?


The burden goes in the other direction, I'm afraid: why in the world would you want to use the word "perceive" that way? What justifies it? If everything has property X, then the word for property X loses all meaning because it no longer picks out anything unique.


Jul 29, 2012 -- 3:35PM, Mesothet wrote:

And, again, please reply to the following (cut and paste from previous post):


"To then simplify the question, if awareness can be possible devoid of any perception, please offer an explanation of what perception-devoid awareness would signify—be it either theoretical or from personal experience, etc."



I thought some of what I said in my previous post implied that I can't make sense of awareness without perception. Remember, I count memory recall and acts of imagination as instances of perception.

Flag Mesothet July 29, 2012 7:43 PM EDT

Since all else seems relatively tangential to—and dependent upon—some sort of common consensus regarding what we understand by consciousness (which, after all, is the ultimate point of this thread):


There appears to be a discrepancy between post #64 (e.g. “[ . . . ] you could if you wanted say that this means we are automata as well [ . . . ]”) and post #68 ( consciousness is agreed to be awareness &  automata are not conscious [hence do not hold awareness; hence, I take the common conclusion to be, humans are therefore not automata]).


Please clarify this. Do you maintain that humans A) are automata, or B) are not automata? If “other” then, again, please clarify your current stance.


I’ll here be presumptuous and assume that—given your latest posts—we actually do hold agreement that humans are not automata on account of automata not being endowed with awareness and humans being endowed with awareness.


Then, please express any disagreements with this premise, if they exist:


The ubiquitous aspect of all consciousness is awareness and the ubiquitous aspect of all awareness is perception. [This seems to be in line with post #73, “[ . . . ]  I can't make sense of awareness without perception [. . .]”]


If no disagreements are found in the above premise, then do we agree that perception will be the core minimal requirement to any consciousness’s existence?


[It’s hard to make heads or tails of this when it’s not yet explicitly clear what the disagreements are, if any.]


-----------------------


The burden goes in the other direction, I'm afraid: why in the world would you want to use the word "perceive" that way? What justifies it? If everything has property X, then the word for property X loses all meaning because it no longer picks out anything unique.


Ah, I’m well aware of unconscious awareness of stimuli as regards the human psyche—as previously expressed by me. As to the question placed of perception devoid of awareness:


Uncertain of what you were referring to yet being aware of your affinity to computer models/metaphors of cognition, I asked my questions as regards such issues as computer vision (e.g. en.wikipedia.org/wiki/Computer_vision). You’ll notice that if computer vision (or smell, etc.) is established to be a form of “perception without awareness”, then the very same may be asked of any other insentient object: does it then also “perceive” information that it may be affected by? Hence, I fully agree with you: the term/concept of perception cannot logically apply to anything that is merely “causally affected by inbound information”.


Paradoxically, however, (if we are agreed that not all objects perceive) perception will then take on a rather significant meaning.


For example, here are two premises not at all discordant to common sense:


1) Sentience [from wiktionary: The state or quality of being sentient; possession of consciousness or sensory awareness] will equate to capacity to perceive.


2) All life will be sentient.


Do we hold agreement on these two premises?


 

Flag JCarlin July 30, 2012 1:41 AM EDT

Jul 29, 2012 -- 8:02AM, Faustus5 wrote:

Yeah, in the end it is a matter of definition. The point that Dennett seems to be making is that language in humans is such a quantum leap above the representational capabilities of other animals that we should restrict the concept of consciousness to only those beings that have what we have.


The circularity is impressive.  It does not help to define or understand consciousness.  I would argue that it is mythmaking that distinguishes humans from other species, not language.  Certainly language is necessary for mythmaking, but I would not equate mythmaking with consciousness.


Consciousness is awareness of self in relation to others.  It is a real-time phenomenon. I am conscious now.  I am, as a human, also conscious now of the myths of my society; that takes advanced conceptual language.  But consciousness of myth and incorporating it into behavior is an ongoing real-time phenomenon.  It is that real-time phenomenon that is consciousness.  Certainly a dog's consciousness has a shorter history, and is concerned with short-term rewards, but the dog is aware of self and its relationship to humans and other animals and adjusts current behavior accordingly. 

Flag newchurchguy July 30, 2012 9:03 AM EDT

Jul 29, 2012 -- 3:15PM, Faustus5 wrote:


Um, no. There is nothing non-physical involved. There are no "forces".


The large scale effects are all the result of tiny mechanical processes, whether neurons firing or bits changing charge.





I dislike conflating the units of measure of physics and chemistry - with those defined by information science.  Even in quote marks.  That said, I perfectly understand Meso's claim of "forces" as causal pathways other than mechanistic causation.


However - saying bits (a unit of measure in science) have electrical charges is an example of speaking about science without ANY background in the subject at hand. A bit is a measure of a reduction of uncertainty.  It is logical in nature and described by the Mathematical Theory of Communication.
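To make that concrete, here is a minimal sketch (my own illustration, not from Shannon's paper) of the bit as a logical unit of measure: the information conveyed by a message is the reduction in the receiver's uncertainty, computed from probabilities alone—no charges anywhere in the definition.

```python
import math

def entropy(probs):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Before the message arrives: two equally likely alternatives, 1 bit of uncertainty.
before = entropy([0.5, 0.5])

# After the message resolves which alternative holds: no uncertainty remains.
after = entropy([1.0])

# The "bit" measures this logical reduction of uncertainty, not any physical charge.
print(before - after)  # 1.0
```

Nothing in the calculation refers to the physical substrate that carries the message; the same 1-bit answer holds whether the signal travels as voltages, photons, or smoke signals.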

Flag Mesothet July 30, 2012 11:09 AM EDT

Jul 30, 2012 -- 9:03AM, newchurchguy wrote:


Jul 29, 2012 -- 3:15PM, Faustus5 wrote:


Um, no. There is nothing non-physical involved. There are no "forces".


The large scale effects are all the result of tiny mechanical processes, whether neurons firing or bits changing charge.





I dislike conflating the units of measure of physics and chemistry - with those defined by information science.  Even in quote marks.  That said, I perfectly understand Meso's claim of "forces" as causal pathways other than mechanistic causation.


However - saying bits (a unit of measure in science) have electrical charges is an example of speaking about science without ANY background in the subject at hand. A bit is a measure of a reduction of uncertainty.  It is logical in nature and described by the Mathematical Theory of Communication.




Thank you for the correction—and “forces” was, in its first use, immediately followed by “(for current lack of a better term)” with good reason.


My trouble, here, is the very semantics of physicalism: e.g., since nothing is deemed to be non-physical, the concept of a concrete object is then assumed to ultimately be just as physical as the concrete object itself. This linguistic—if not conceptual—attempt to monopolize all that may be existent (be it subjective or otherwise) poses quite a challenge in differentiating different forms of reality.


As concerns the issue specifically addressed, a web of neurons which fire together will produce a (what can I/we call this? adequate terms seem to currently be missing) . . . let’s now say, a gestalt form of reality: one that is simultaneously composed of its parts (negentropic neurons—for they are not entropic) and one which to some extent goads/limits/forms the processes of the parts from which it is composed.


Compare via analogy to a “culture” composed from humans. Yes, yes, a culture is fully physical as well, objectively measurable and all, were one so inclined to stipulate on account of the physicalist worldview. But the point is: as with a culture that fully depends upon individual humans for its being and which in turn goads/limits/forms individual humans from the time of birth, so too may be allegorically stated of “voting” coalitions of neurons. As has been previously touched upon in part (cf. post #71: “Well, in the brain if a neuron doesn't reliably get signals from other neurons it will die.”—although, given the reality of neural plasticity, neurons are also known to “adapt” (in a non-evolutionary sense) to their environment), the gestalt reality formed by the given coalition of neurons that concurrently fire together will need to be to some extent “functional” for the given set of neurons to maintain their synaptic connections. All this is highly generalized and colored by semantics, yet generally holds true given today’s empirical data.


So . . . any help in this department would be welcomed. How would one best term this “gestalt reality” created by individual neurons firing together? Note: it will be these gestalt realities which will ultimately converge into the gestalt reality of first-person consciousness within CNS-endowed organisms (humans if no other).


------------


Nevertheless, a rational/logical/coherent reply to post #74 seems pivotal—at least to me—to any further earnest attempts at better understanding consciousness per se. 


Flag Faustus5 July 30, 2012 6:47 PM EDT

Jul 29, 2012 -- 7:43PM, Mesothet wrote:

There appears to be a discrepancy between post #64 (e.g. “[ . . . ] you could if you wanted say that this means we are automata as well [ . . . ]”) and post #68 ( consciousness is agreed to be awareness &  automata are not conscious [hence do not hold awareness; hence, I take the common conclusion to be, humans are therefore not automata]).


There's no discrepancy, there's just me acknowledging that for some people, if they have to admit that they are made of automata, that implies (for them) that they must also be automata. Since I've seen this move made in previous discussions with others over the years, I thought it would be bad for me to beg the question against you on that score. So I left it open in case you wanted to go in that direction.


Jul 29, 2012 -- 7:43PM, Mesothet wrote:

Please clarify this. Do you maintain that humans A) are automata, or B) are not automata? If “other” then, again, please clarify your current stance.



I'm a pragmatist philosopher in the tradition of Richard Rorty, so my inclination isn't going to be to declare "No, there's no way people are automatons" but rather "I think it is best if we reserve that term for simple, non-human entities so the word preserves its function of contrasting those entities with us."


Jul 29, 2012 -- 7:43PM, Mesothet wrote:

Then, please express any disagreements with this premise, if they exist:


The ubiquitous aspect of all consciousness is awareness and the ubiquitous aspect of all awareness is perception. [This seems to be in line with post #73, “[ . . . ]  I can't make sense of awareness without perception [. . .]”]


I would only add a friendly amendment, to the effect that I would prefer to reserve terms like "awareness" for beings with complex nervous systems so that when they perceive something, they can understand and categorize it to some degree. A hungry frog will automatically try to catch any small, dark, airborne object within its forward visual arc, whether it be a fly or a lead pellet. It perceives, but it doesn't have the capacity to be aware of what it is doing. It is more of an automaton. So in this sense a thermostat can be granted the perception that "it is too cold" (in some metaphorical sense) but not awareness.


Jul 29, 2012 -- 7:43PM, Mesothet wrote:

Hence, I fully agree with you: the term/concept of perception cannot logically apply to anything that is merely “causally affected by inbound information”.


I wouldn't say it is a matter of logic so much as usefulness, though it would be a matter of logic if a good dictionary definition countered the way one wanted to use the word. The more stupid and simple any agent under discussion is, the more metaphorical intentional idioms become, to the point where their application even as metaphor starts to look silly, as in the case of the rock and the hammer.


Jul 29, 2012 -- 7:43PM, Mesothet wrote:

1) Sentience [from wiktionary: The state or quality of being sentient; possession of consciousness or sensory awareness] will equate to capacity to perceive.


Instead of "equate to" why can't we just say "must involve the"?


Jul 29, 2012 -- 7:43PM, Mesothet wrote:

2) All life will be sentient.


Do we hold agreement on these two premises?


Certainly not #2. If all life is sentient then the word no longer picks out what is special and different about us and other animals with higher cognitive functions. And that, I think, is part of the point of having concepts like that in the first place.

Flag Faustus5 July 30, 2012 7:07 PM EDT

Jul 30, 2012 -- 1:41AM, JCarlin wrote:

The circularity is impressive.  It does not help to define or understand consciousness.


There is no circularity, and yes it does help as a methodology.


Since consciousness is such a slippery and hard to study subject--since both philosophers' and non-philosophers' minds are filled with treacherously misleading and theory-laden intuitions about it which too often are turned into starting assumptions that "everyone knows to be obviously true"--you have to be especially vigilant. We only know for sure in our own cases that we are conscious, so to be careful and properly scientific, we need to start with our minds first, find out what is going on in us that makes us conscious, and then only grant that other beings are conscious when we can reliably measure something suitably, significantly similar going on in them, too.


And if the programming that language causes in our higher processing streams is so significant that it actually makes the difference between telling green from blue (!!!), then language is absolutely essential to making our consciousness what it is.

Flag Mesothet July 30, 2012 10:44 PM EDT

Faustus5,


We are currently at the following impasse:


If all life will ultimately and technically be deemed automata, then there will be no logical means of differentiating between that which can perceive and that which can not perceive.


IF perception is simplistically defined as “ability to be affected by inbound information and react to such”, then (if you prefer) a billiard ball hit by another will, technically, be fully included within the class of entities endowed with perception--as will all others. 


As was previously mentioned, this utter lack of qualitative differentiation between animate and inanimate entities (or however else one chooses to linguistically address the two) will directly lead into a dire-straits paradox of no-holds-barred animism; one wherein, as you’ve previously stated, the word for property X (in this case “perception”) will become utterly meaningless. 

Flag farragut July 30, 2012 10:51 PM EDT

"A hungry frog will automatically try to catch any small, dark, airborne object within its forward visual arc, whether it be a fly or a lead pellet. It perceives, but it doesn't have the capacity to be aware of what it is doing. It is more of an automaton."


 


I'm not sure that it's quite that simple. A few weeks ago, I sat quietly at the pond watching the fish and other denizens. A frog was hunkered down amongst the lily pads awaiting an opportunity. A wasp came buzzing over quite closely, a few inches. The frog cautiously retreated back to being nearly hidden until the wasp left.  Hardly a minute later there buzzed by a quite harmless, and I presume tasty, bug that the frog grabbed as quick as a wink.


He does discriminate. He is not strictly an automaton. Up to our standards? I suppose not, but clever enough to survive.

Flag stardustpilgrim August 1, 2012 9:39 PM EDT

Note: the highlighted part below was from a program about the senses on the Science channel, repeated tonight (Wednesday night).


Jul 29, 2012 -- 1:19PM, stardustpilgrim wrote:


Jul 29, 2012 -- 8:02AM, Faustus5 wrote:


Jul 28, 2012 -- 4:04PM, JCarlin wrote:

I would suggest that the functional equivalent of language, that is the ability to meaningfully convey abstract concepts would expand consciousness to those animals that can meaningfully communicate these concepts to each other.  Canine body language is clearly understood by those who care to make the effort, and can convey abstract concepts like  "Lets play" or intricate status levels. 


Language is certainly the deal-maker that permits symbolic abstractions, but I am not sure consciousness in a social sense can be restricted to humans.


Yeah, in the end it is a matter of definition. The point that Dennett seems to be making is that language in humans is such a quantum leap above the representational capabilities of other animals that we should restrict the concept of consciousness to only those beings that have what we have.


Yesterday a study discussed on public radio's excellent "Radiolab" program reinforced this point for me, and now I'm really not sitting on the fence anymore: I'm more convinced than ever that the ability of language to create abstractions is even more powerful and essential to consciousness than I had thought previously.


You would think (or I would have before yesterday) that the abstractions that language creates for us would be around things like "tomorrow", "borrow", "promise" and other like concepts. But colors? No. Intuitively, one would think that one sees all the colors that exist given one's optical capabilities and that's that.


Well, it ain't so simple.


In this study, members of a technologically primitive tribe who did not have a word for the color "blue" were shown a dozen color squares. They were asked to point to the color that was different than the others. The matrix of color squares consisted of various shades of green and one blue square. Amazingly, they were not able to differentiate blue from green. To them, all the squares were the same color.


Now, no one is suggesting that at some raw, purely neurological level, their brains were not capable of registering the fact that the wavelengths from the blue square were different from the green squares.


But apparently, downstream from that purely raw level, where you eventually get to conscious experience, language shapes your experience in ways that are even more fundamental than I had ever imagined. (Trivia: in every language studied so far, blue is always the last color word created. You don't find it in Homer's Greek, for instance.)


I'm reminded of the thesis of that book I mentioned to you last year, The Origins of Consciousness in the Breakdown of the Bicameral Mind. The author argues, on the basis of the analysis of the language in ancient texts, that humans in ancient civilizations weren't fully conscious in the modern sense, even when they had both language and writing. He notes that between the Iliad and the Odyssey, in the former, decisions come from the gods and no one seems to have private thoughts, but in the latter, characters make up their own minds and are capable of deceiving others.


That's going too far, since we know some animals are capable of deceiving others and therefore have a "theory of mind", but it is intriguing to wonder how much the introduction of new concepts makes us radically different than our ancestors or civilizations that don't have the same vocabulary.




I read somewhere recently (within the last week, I'll try to backtrack where) the difference between green and blue. Some colors have a genetic basis and some (later differentiated colors, your blue) have their basis in learning or abstraction. ......I seem to recall the difference had something to do with the light spectrum (red on one end and blue on the other end). 


............


I think we have to make a distinction between consciousness and self-consciousness.


Animals are conscious, but essentially live timelessly; instinct is more important than memory; therefore they are not self-conscious.


Humans, OTOH, live through a self, the primary constituent being memory through abstraction. Self as memory creates the movement of (psychological) time; memory causes self to be perpetuated. IOW, self as memory creates self-consciousness.


The paradox here is that although self-as-memory creates self-consciousness, the more one lives through memory, the more 'mechanical' one lives, IOW, the less conscious. And alternatively, the more one lives in the present moment (absent psychological self-as-memory, through abstraction) the more self-conscious one actually is (IOW, the more conscious one is).


All this can be discovered, for oneself, by a little experimentation (not introspection).


The key is to live on the razor's edge between self-consciousness-as-memory/abstraction, and learning, which is movement from the known into the unknown. If we do not live on this razor's edge, we become predictable, stuffy, mechanical, and thus (will become) tired of our own selves.


sdp       





Flag Mesothet August 2, 2012 11:46 AM EDT

Didn’t get around to seeing the episode, yet at least.


I read somewhere recently (within the last week, I'll try to backtrack where) the difference between green and blue. Some colors have a genetic basis and some (later differentiated colors, your blue) have their basis in learning or abstraction. ......I seem to recall the difference had something to do with the light spectrum (red on one end and blue on the other end). 


A Vietnamese-American once told me that in Vietnamese there is no separate word for green or blue: one word apparently addresses both.


From long-ago anthropological studies I also seem to recall that—given that different languages/ethnicities have different linguistic means of addressing colors—the minimum “colors” a known language/culture may have will be the two of black/dark & white/bright; where there is a third color, it will always be red; where there are four colors, the fourth will be either blue or green; the rest I remember as becoming more complicated. Still, in a way, this kinda reminds me of studies in which global universality amongst differing human cultures was found between very basic sounds (of pitch, etc.) and emotional recognition. While I’m at it, also the reality of there being a universal “basic” non-verbal communication (facial expressions) for basic emotions that then becomes modified/exaggerated/etc. within diverse cultures—sometimes to the point that they convey vastly different information when assessed between cultures. (The same has also been found true within different “cultures” of great apes: different chimpanzee “tribes/cohorts” will use and pass down, generation to generation, different means of conveying info via facial expressions.) [Don’t have handy references for all these, so take them as you will.]


There’s a lot more to all this than can be easily discerned, imo. Taking the example provided, how would the typical Vietnamese speaker perceive the objective world, relative to the typical American, when it comes to colors? As a Western blue/green color-blind person would?


Another interesting tidbit will be that magenta will in no way be part of “physical reality”: it isn’t present in the electromagnetic spectrum; it is a fully “mental” color, existing as commonly-shared reality at least amongst (portions of?) the human species.


All very interesting stuff, I agree.


I’ll try to reply to the consciousness/self-consciousness dichotomy in a while . . .

Flag Faustus5 August 2, 2012 4:13 PM EDT

Jul 30, 2012 -- 10:44PM, Mesothet wrote:

If all life will ultimately and technically be deemed automata, then there will be no logical means of differentiating between that which can perceive and that which can not perceive.


No, it will mean that we are probably misusing the word "automata". If we must grant that all living beings are automatons in some extended sense, we aren't thereby forbidden from creating criteria that distinguish acts of perception from acts that don't count as perception.


Jul 30, 2012 -- 10:44PM, Mesothet wrote:

IF perception is simplistically defined as “ability to be affected by inbound information and react to such”, then (if you prefer) a billiard ball hit by another will, technically, be fully included within the class of entities endowed with perception--as will all others.


Then maybe that's a bad definition. Maybe we need to extend it to something like “ability to be affected by inbound information and react to such in a way that is goal directed". How does that work as an improvement?
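To illustrate how the amended definition would sort cases, here is a toy sketch of my own (the "goal" attribute is invented purely for illustration, not a claim about real mechanisms): reacting to inbound information counts as perception only when the reaction is directed at some goal.

```python
def perceives(entity):
    """Toy criterion: a reaction to inbound information counts as
    perception only when the reaction is directed at some goal."""
    return entity["reacts_to_input"] and entity["goal"] is not None

# A billiard ball reacts to the impact, but toward no goal of its own.
billiard_ball = {"reacts_to_input": True, "goal": None}

# A thermostat reacts in a way aimed at a goal state (in a thin, metaphorical sense).
thermostat = {"reacts_to_input": True, "goal": "maintain set temperature"}

print(perceives(billiard_ball))  # False
print(perceives(thermostat))     # True
```

On this sketch the billiard ball drops out of the class of perceivers, which is exactly the work the "goal directed" clause is meant to do.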

Flag Faustus5 August 2, 2012 4:20 PM EDT

Jul 30, 2012 -- 10:51PM, farragut wrote:

I'm not sure that it's quite that simple. A few weeks ago, I sat quietly at the pond watching the fish and other denizens. A frog was hunkered down amongst the lily pads awaiting an opportunity. A wasp came buzzing over quite closely, a few inches. The frog cautiously retreated back to being nearly hidden until the wasp left.  Hardly a minute later there buzzed by a quite harmless, and I presume tasty, bug that the frog grabbed as quick as a wink.


He does discriminate. He is not strictly an automaton. Up to our standards? I suppose not, but clever enough to survive.


All I can say is that the paper where I read about this experiment did describe the frogs mindlessly trying to grab lead pellets as well as food pellets and flies. Then they worked out which parts of its brain were governing the behavior, and what they found pretty much operated on the basis of some sort of if/then gate functionality, biologically instantiated.


It could be that what they call the "preferred stimulus" for the part of the brain that identified something as "food" was keyed not just to "small and dark and moving in my visual field" but also to the manner of its flight. Wasps don't typically move the way flies do.
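The kind of trigger being described might be caricatured like this (a deliberately crude sketch; the feature names and the optional flight-manner gate are my own paraphrase, not the paper's actual model):

```python
def strike(stimulus, check_flight_manner=False):
    """Crude caricature of a sign-stimulus trigger: strike at anything
    small, dark, and moving; optionally also gate on manner of flight."""
    trigger = stimulus["small"] and stimulus["dark"] and stimulus["moving"]
    if check_flight_manner:
        trigger = trigger and stimulus["flies_like_prey"]
    return trigger

fly = {"small": True, "dark": True, "moving": True, "flies_like_prey": True}
lead_pellet = {"small": True, "dark": True, "moving": True, "flies_like_prey": True}
wasp = {"small": True, "dark": True, "moving": True, "flies_like_prey": False}

# The bare rule cannot tell food from a pellet: both trigger the strike.
print(strike(fly), strike(lead_pellet))  # True True

# Adding the speculative flight-manner gate would spare the wasp.
print(strike(wasp, check_flight_manner=True))  # False
```

The point of the caricature is that a rule this simple strikes at lead pellets just as readily as at flies, which is why the behavior looks automaton-like.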


Or it could be that actually, the experiment I'm recalling was on TOADS rather than FROGS, and maybe toads are more dense. I am pretty sure I misidentified the subject as frogs when they were really looking at toads.

Flag Mesothet August 2, 2012 11:58 PM EDT

I’m quite aware that my views of perception are not mainstream, so reservations are to be quite expected. [One will note that mainstream views of perception will be those founded upon physicalism, cf. en.wikipedia.org/wiki/Perception. Such views will also be to some extent self-contradictory: e.g., if perception per se will be attained from sensory information—and “sensory” will signify some form of physical sense—then such a thing as perception via the “mind’s eye” (or via the mind’s ear, tongue, nose, skin, etc.) will then be, technically, impossible. However, as previously agreed upon, such a thing as “perception of memory” (which does at times involve the “mind’s eye”, etc.) will occur.]


And, as has been noted hereabouts, there is the apt assertion that extraordinary claims require extraordinary evidence. As pertains to assessments of the objective world, in this case of biology: agreed.


Unless this may be overlooked, it’s in our nature to hold some degree of anthropocentrism as regards such things as the nature of consciousness. All that this means, however, will be that we each hold the presumption that “all others will perceive, and thereby inhabit, the very same ‘subjective’ realities which I myself perceive/inhabit”—basically, then, anthropocentrism will be nothing more than a communal form of egocentrism, and even here one will often tend to assume that all others of the same “label” will be of like reality to oneself. The details of any cohort will most typically evidence this presumption to be wrong.


That stated, I would like to propose some means of finding a more objective, non-anthropocentric, and ontology-neutral basis of discerning what “consciousness” might actually be in its minimal essence. (Obviously, there will always be the possible assertions of “more conscious” and “less conscious” organisms relative to some other(s).)


From post #75: Consciousness is awareness of self in relation to others. 


This, to me, rings true. But one can try to imagine an alien on a different planet with a different form of awareness of self; and then one can maybe more easily try to imagine a different species of organism on this planet as likewise holding a rather alien form of awareness relative to our own. (BTW, nothing necessitates that “aliens” have to be more conscious than us.)


Faustus5, you once stated that you believe consciousness is only to be found amongst organisms with a CNS. Though I don’t agree, even so, wouldn’t there be “gestalt realities” of neurons firing together (as per post #77 – for lack of a better term) that would result in some form of “gestalt reality” that may perceive for the entire organism in ... say, an earthworm? . . . not to here bring up even simpler nervous systems such as can be found in nematodes. I will offer that any objection to such a conclusion will merely be one of unsubstantiated opinion.


Technically—at a hopefully very stringent level of logic (unless logistic mistakes may be here found)—to perceive there will need to be something that is perceived. Hence, anything which may be deemed to perceive will be differentiated—hence separated—from that which it perceives. Hence, if anything “other” is perceived, technically, there will need to be some form of “subjective boundary” between perceiver and perceived. [Regardless of what one may construe as the ontological reality of “perceiver”, it currently seems to me utterly nonsensical to acknowledge perception devoid of some subject which so perceives.] This subjective boundary between that which perceives and the other relative to itself will, in itself, either be a) omni-sentient (i.e. absolutely indiscriminate in what it takes into account as relevant information), something that I would like to argue is an impossibility, or b) to some extent discriminatory as regards information which “it” perceives.


Now, not even humans—who we may safely assert are more exalted at the ability of awareness of self (or self-consciousness) than any other known biological organism—will typically, in a moment-by-moment sense, retain awareness of self as a conscious entity. As I’m typing this down, I’m focusing on all various forms of “other”—not on myself as a conscious being. Yet, by the sheer act of perceiving and dealing with such “other”, I retain a differentiation between “my-self” and such “other”: thereby yet endowing me with what is to me an utterly implicit and intrinsic “awareness of self via awareness of other”.


I’ll uphold that the same utterly implicit and intrinsic awareness of self via awareness of other will—by logical necessity—need to hold true for any entity which can perceive.


 All this might be somewhat incomplete without the following: perception itself—I will now uphold in a condensed manner—will be an impossibility devoid of the creative act of endowing information with some degree of subjective meaning. Again, at minimum, the “subjective meaning” of positive, negative, or neutral valence (cf. en.wikipedia.org/wiki/Valence_(psychology)).


 Again, on a fully logical level of contemplation, any such forms of valence will technically be impossible devoid of a ubiquitous and utterly generalized “cause” which drives the subjective perceiving entity toward . . . what I’ll linguistically address as “optimal avoidance of dolor”. [Complexities galore in terms of trying to assess optimal avoidance of dolor in terms of such things as short- and long-term time spans. This holds true even for humans, never mind a nematode. Yet please consider any logical alternative to this proposed driving cause to various types of valence—none have yet come to my mind.]


 So, to be as extreme as I can currently be, using the denotation of “ability to be affected by inbound information and react to such in a way that is goal directed”—with the ultimate goal to all pursuits being that of “optimal avoidance of dolor”—I’ll here address the reality of gametes: more specifically, the sperm. Leaving aside biologically damaged sperm, sperm will somehow fully fit this denotation of perception.


 It should be abundantly clear that I am not arguing for sperm having anything near to what we as humans contemplate as (human) consciousness. Yet sperm will perceive, discriminate (e.g. between uterus lining and the “egg”), be goal oriented, and hold some causal drive to going this-a-way rather than that-a-way. All this, then, to me clearly indicates that sperm will hold some degree and type of awareness that is rather alien to ourselves.


 Again, more interesting than sperm to me will be neurons themselves. But maybe all this is already more than a mouthful. I anticipate proper critiques.


 [To sum things up, however, the principal conclusion I’d like to uphold is that to be alive will also be to perceive; that to perceive will be to hold some degree of “awareness”, regardless of how different it may be relative to us; hence, some “degree” of consciousness. This will then allow for a natural progression of complexity of consciousness via mechanisms of biological evolution. As outliers—as such also apply to the attribute of life itself—one can contemplate both viruses and prions as entities that are in-between perceiving-things and non-perceiving-things.]

Flag newchurchguy August 3, 2012 2:04 PM EDT

Aug 2, 2012 -- 4:20PM, Faustus5 wrote:


All I can say is that the paper where I read about this experiment did describe the frogs mindlessly trying to grab lead pellets as well as food pellets and flies. Then they worked out which parts of its brain were governing the behavior, and what they found pretty much operated on the basis of some sort of if/then gate functionality, biologically instantiated.




I did a double take -- until I exploded in laughter.  There is such a functionality in Excel, or in programmable logic controllers (PLCs), but using the word "gate" implies a Boolean expression.  There is no such gate.


Every time, F5, you try to sound practical or discuss real science/technology, you present these made-up science bloopers.


The terms stimulus and response make sense in the case of the frog.  Without knowing anything, my guess is the frog or toad uses auditory stimuli to sort stinging insects from prey.
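
Purely as an illustration of what an "if/then gate" reading of the frog experiment would even amount to, the behavior can be sketched as a plain Boolean predicate. This is a sketch only; the feature names `small` and `moving` are hypothetical stand-ins, not taken from the paper under discussion:

```python
# Illustration only: the disputed "if/then gate" reading of the frog's
# prey-capture circuit, written as a Boolean predicate over two
# hypothetical stimulus features.

def strike(small: bool, moving: bool) -> bool:
    """Fire the tongue-strike 'gate' when both features are present."""
    return small and moving

# A lead pellet flicked past the frog is small AND moving, so this
# predicate fires on it just as it would on a fly, which is the
# "mindless grabbing" the experiment described:
print(strike(small=True, moving=True))   # fly -- or lead pellet
print(strike(small=True, moving=False))  # stationary pebble
```

Whether such a predicate deserves the word "gate" is exactly the terminological point in dispute here.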

Flag newchurchguy August 3, 2012 2:07 PM EDT

Aug 2, 2012 -- 11:58PM, Mesothet wrote:


Technically—at a hopefully very stringent level of logic (unless logistic mistakes may be here found)—to perceive there will need to be something that is perceived. Hence, anything which may be deemed to perceive will be differentiated—hence separated—from that which it perceives. Hence, if anything “other” is perceived, technically, there will need to be some form of “subjective boundary” between perceiver and perceived. [Regardless of what one may construe as the ontological reality to “perceiver”, it currently seems to me utterly nonsensical to acknowledge perception devoid of some subject which so perceives.] This subjective boundary between that which perceives and the other relative to itself will, in itself, either be a) omni-sentient (i.e. absolutely indiscriminate in what it takes into account as relevant information), something that I would like to argue as an impossibility, or b) to some extent discriminatory as regards information which “it” perceives.




That was very well laid out.  In simple terms: this is the distinction of thinker and thought.

Flag Faustus5 August 4, 2012 9:08 AM EDT

Aug 2, 2012 -- 11:58PM, Mesothet wrote:

Faustus5, you once stated that you believe consciousness is only to be found amongst organisms with a CNS. Though I don’t agree, even so, wouldn’t there be “gestalt realities” of neurons firing together (as per post # 77 – for lack of a better term) that would result in some form of “gestalt reality” that may perceive for the entire organism in ... say, an earthworm? . . . not to here bring up even simpler nervous systems such as can be found in nematodes. I will offer that any objection to such conclusion will merely be one of unsubstantiated opinion.


Attributing intentional states to virtually mindless organisms is quite useful. I don't object to that. I just remind you that such states are very simple.


As for the rest of your post, I just don't understand why you think we need to essentially "start over" in the field of consciousness studies, especially when cognitive neuroscience has been so successful recently. I would urge you to first learn about and understand the prevailing consensus and then build reforms or revolutions as an alternative. If I am going to consider alternatives to successful models, I first need to have the flaws and limitations of those models articulated.

Flag Faustus5 August 4, 2012 9:14 AM EDT

Aug 3, 2012 -- 2:04PM, newchurchguy wrote:

I did a double take -- until I exploded in laughter.  There is such a functionality in Excel, or in programmable logic controllers (PLCs), but using the word "gate" implies a Boolean expression.  There is no such gate.


No one cares what you think, Newchurchguy. The science is there.


Time and time again, your ignorance of neurology and your bottomless arrogance in thinking you know more than anyone else around here have caused you to make a fool of yourself--recall the time you laughed at my use of the expression "connection strengths" when discussing synapses and then I cited half a dozen passages of neurologists using the same expression? This is no different.


Aug 3, 2012 -- 2:04PM, newchurchguy wrote:

Every time, F5, you try to sound practical or discuss real science/technology, you present these made-up science bloopers.


Actually, that's what you do.

Flag Faustus5 August 4, 2012 11:09 AM EDT

On the side-issue of how language shapes perception, a poster in another forum sent me this link to a short BBC documentary on the subject:


youtu.be/4b71rT9fU-I

Flag Mesothet August 4, 2012 12:30 PM EDT

[ . . . ] but using the word "gate" implies a Boolean expression.  There is no such gate.


I think this is intended to assert that all neurons will need to transcend a given threshold for them to fire via their axon. The assumption is that it’s all “mechanical”—though there are more problems with this classical view with each year that passes. More on this later if needed.
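
For reference, the classical threshold picture alluded to here is essentially a McCulloch-Pitts unit: the neuron fires only when the weighted sum of its inputs crosses a threshold. A minimal sketch, with arbitrary made-up weights and threshold (not a model of any real neuron):

```python
# Minimal sketch of the classical "all-or-none" threshold picture of a
# neuron: it fires via its axon only when the weighted sum of its inputs
# reaches a threshold. All numbers here are arbitrary examples.

def fires(inputs, weights, threshold):
    """Return True when the weighted input sum reaches the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return total >= threshold

weights = [0.5, 0.5, -1.0]             # two excitatory inputs, one inhibitory
print(fires([1, 1, 0], weights, 1.0))  # both excitatory active -> fires
print(fires([1, 1, 1], weights, 1.0))  # inhibitory input suppresses firing
```

This is the "mechanical" view being questioned above, stated explicitly.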


That was very well laid out.  In simple terms: this is the distinction of thinker and thought.


 Cheers for the compliment. Getting one once in a while doesn’t feel bad. As to the distinction between thinker and thought, this will be dependent upon how one addresses thought itself. Perception can technically be construed as a form of thought (I’ll skip the reasons for this for now); hence, anything that perceives could well be construed as holding some degree of mind. As with thought, reasoning, etc., mind too will be a term all people will seem to take for granted but no one—and I do mean no one—has an adequate logistically sound definition for.  These things—as much as anyone would like to argue differently—all pertain to the domain of “philosophy of mind/consciousness” [Yes, such will more often than not be in accord with data from current empirical studies.] But, for example, see the difficulties to “I think therefore I am” holding sound reasoning: en.wikipedia.org/wiki/Cogito_ergo_sum#Cr... I do, however, uphold that the same difficulties cannot be found to the assertion, “I am/exist when I perceive”. Nitty gritty stuff, so I’ll skip on them for now.


---------------


If I am going to consider alternatives to successful models, I first need to have the flaws and limitations of those models articulated.


 So I take it you have yet to find faults with the argument(s) presented.


 I’ve already placed one fault with it at the very start: that of physicalist interpretations of perception not being able to account for such things as the mind’s eye as far as humans are concerned.


 As a friendly reminder, this thread is not about today’s mainstream models of neuroscience; it is about what consciousness per se is. [And no, no modern model of neuroscience has figured this out . . . not even close.]


 -- Again, do you find any logical faults with the argument presented?


 As to the prevailing consensus of neuroscience, it is a battle between quite outdated models of “reality” and attempts at somehow assimilating newly acquired empirical evidence. Given your expertise, I assume you are quite familiar with this “battle” in terms of neural plasticity: it took mountains of evidence to at last tip the scale toward a new model in which the brain is not hardwired in precise synaptic connections that never change from the time of birth to the time of death—or thereabouts. [This even though neuroplasticity is about the only way to make sense of such things as “new ideas, memories, etc.”. William James first came up with the concept on logical grounds . . . too bad we’ve had to wait about 100 years to at last come to grips with it on account of empirical reality.] Now, there is the dirty little secret of what (again for current lack of a better term) I’ll term top-down processing within cognition/neuroscience: that what “I think, choose, focus on, etc.” will at times produce a physical effect upon the underlying structure of neural “architecture” which, again, is one of neural plasticity.


 But anyway, what did you think of the following argument:


 [ . . . ] perception itself [ . . . ] will be an impossibility devoid of the creative act of endowing information with some degree of subjective meaning.


 If true, then to perceive will be to simultaneously *create* information where none before existed.


 If false, then we once again wind up with “a billiard ball hit by another” being technically deemed to be a “perceiving-thing”.

Flag newchurchguy August 4, 2012 12:37 PM EDT

Aug 4, 2012 -- 9:14AM, Faustus5 wrote:


Time and time again, your ignorance of neurology and your bottomless arrogance in thinking you know more than anyone else around here have caused you to make a fool of yourself--recall the time you laughed at my use of the expression "connection strengths" when discussing synapses and then I cited half a dozen passages of neurologists using the same expression? This is no different.




hmmm....   I don't remember such an exchange -- please reference it.  We had an exchange about representation where you made some good comments.

Flag Faustus5 August 4, 2012 1:30 PM EDT

Aug 4, 2012 -- 12:30PM, Mesothet wrote:

So I take it you have yet to find faults with the argument(s) presented.


I have responded to everything that at least resembled an argument.


The problem is that you aren't discussing the issues with reference to the actual content of modern scientific paradigms. Those paradigms have been enormously successful and I don't know why you wouldn't want to think about consciousness using them.


Aug 4, 2012 -- 12:30PM, Mesothet wrote:

I’ve already placed one fault with it at the very start: that of physicalist interpretations of perception not being able to account for such things as the mind’s eye as far as humans are concerned.


There is no obligation to deal with things that don't actually exist. Every time you describe what you mean by this concept, you are describing what is actually a mythical entity as far as I can tell.


Aug 4, 2012 -- 12:30PM, Mesothet wrote:

As a friendly reminder, this thread is not about today’s mainstream models of neuroscience; it is about what consciousness per se is. [And no, no modern model of neuroscience has figured this out . . . not even close.]


I have already described such a modern model and cited a paper which discusses it. I have yet to see anyone show me how this model is flawed or provide something that even remotely surpasses it.


Aug 4, 2012 -- 12:30PM, Mesothet wrote:

Again, do you find any logical faults with the argument presented?


No logical faults. Just faults in terms of not being scientific, relying on non-useful uses of terms, etc.


Aug 4, 2012 -- 12:30PM, Mesothet wrote:

 Now, there is the dirty little secret of what (again for current lack of a better term) I’ll term top-down processing within cognition/neuroscience: that what “I think, choose, focus on, etc.” will at times produce a physical effect upon the underlying structure of neural “architecture” which, again, is one of neural plasticity.


That's not what neural plasticity is--what you are describing is actually dualism.


When you think, choose, or focus on something, that is a physical process, and repetition of that process causes physical changes in your brain that last beyond the instance of your thinking, choosing, focusing, etc. It isn't as if there is a separate entity that reaches into the brain and produces a physical effect. Maybe that's not what you meant to be suggesting, but that interpretation would be consistent with other statements you have made.
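
That point about repetition leaving a lasting physical trace can be sketched as a toy Hebbian weight update: correlated activity strengthens a connection, and the strengthened weight persists after the activity stops. The learning rate, activity values, and repetition count below are arbitrary illustrative numbers, not a model of any specific synapse:

```python
# Toy Hebbian sketch: repeated co-activation of two units strengthens
# the connection ("connection strength") between them, and the change
# outlasts the episode of activity that produced it.

def hebbian_update(weight, pre, post, rate=0.1):
    """Strengthen the synapse in proportion to correlated activity."""
    return weight + rate * pre * post

weight = 0.0
for _ in range(20):      # twenty repetitions of the same "thought"
    weight = hebbian_update(weight, pre=1.0, post=1.0)

# The weight change persists once the repetitions end:
print(round(weight, 2))  # 2.0
```

No separate entity reaches into the loop; the lasting change is just the accumulated effect of the repeated physical process itself.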


Aug 4, 2012 -- 12:30PM, Mesothet wrote:

 But anyway, what did you think of the following argument:


 [ . . . ] perception itself [ . . . ] will be an impossibility devoid of the creative act of endowing information with some degree of subjective meaning.




It doesn't resemble an argument so much as a vague assertion that I can't make much sense out of.


Aug 4, 2012 -- 12:30PM, Mesothet wrote:

If true, then to perceive will be to simultaneously *create* information where none before existed.


I don't know what that is supposed to mean.

Flag Faustus5 August 4, 2012 1:30 PM EDT

Aug 4, 2012 -- 12:37PM, newchurchguy wrote:


Aug 4, 2012 -- 9:14AM, Faustus5 wrote:


Time and time again, your ignorance of neurology and your bottomless arrogance in thinking you know more than anyone else around here have caused you to make a fool of yourself--recall the time you laughed at my use of the expression "connection strengths" when discussing synapses and then I cited half a dozen passages of neurologists using the same expression? This is no different.




hmmm....   I don't remember such an exchange -- please reference it.  We had an exchange about representation where you made some good comments.


Do your own homework.

Flag farragut August 4, 2012 2:23 PM EDT

"That's not what neural plasticity is--what you are describing is actually dualism."


"When you think, choose, or focus on, that is a physical process, and repetition of that process causes physical changes in your brain that last beyond the instance of your thinking, choosing, focusing on, etc"


 


Sounds to me like the learning process.

Flag Mesothet August 4, 2012 2:38 PM EDT

For now overlooking other things and taking things one step at a time . . .


There is no obligation to deal with things that don't actually exist. Every time you describe what you mean by this concept, you are describing what is actually a mythical entity as far as I can tell.


Are you trying to imply that when a person utilizes their capacity of imagination (take Einstein’s riding on a beam of light, if this suits your fancy a bit better than the mere realities of daydream and REM dreams) that such person does not then perceive imaginary object(s) as other to him/herself? That because such realities are not objectively, tangibly, real in equal measure to one and all they then “don’t actually exist” as pertains human *consciousness*? I’ll stop short on this one.


By what *standard* do you then differentiate “science” (as it here pertains to consciousness) from outright dogma as pertains to true, absolute, reality? Take time to think about this one, please.


[Contemplate the (nonexistent?) subjective reality of psychological pain . . . that would then be experienced by whom exactly? We are, after all, dealing with the issue of consciousness here . . . Do you have any knowledge of the history behind modern cognitive sciences? Of how after William James came the little numskull of John B. Watson who proclaimed that because consciousness cannot be empirically observed per se the study of psychology has no business studying “mind” but only “observable behavior” . . . after which then became popularized the utterly non-empirical theory of Freudianism. . . it took some time before cognitive sciences came back into favor. Or the argument between Skinner (the behaviorist) and Chomsky . . . in which Chomsky won out . . . leading to today’s understanding of the “language/grammar instinct”? I will, again, here stop short. But, again, dogma where it is found (in individuals or cohorts) changes over time; the empirical data remains.]


To simplify, in which way is the “mind’s eye” an actually nonexistent reality that is actually only mythical (in this context, mythical meaning, again, “non-existent”)?


 

Flag Faustus5 August 4, 2012 2:47 PM EDT

Aug 4, 2012 -- 2:23PM, farragut wrote:

Sounds to me like the learning process.



Yep, that's what neural plasticity is trying to explain, among other things.

Flag Faustus5 August 4, 2012 3:01 PM EDT

Aug 4, 2012 -- 2:38PM, Mesothet wrote:

Are you trying to imply that when a person utilizes their capacity of imagination (take Einstein’s riding on a beam of light, if this suits your fancy a bit better than the mere realities of daydream and REM dreams) that such person does not then perceive imaginary object(s) as other to him/herself? That because such realities are not objectively, tangibly, real in equal measure to one and all they then “don’t actually exist” as pertains human *consciousness*?


No, I'm talking about instances where you use dualistic or homuncular language to describe the "mind's eye". For instance, where you described a self that "chooses" which coalition of "voting" neurons to go with, among other instances.


You perhaps don't intend to be interpreted that way, but it's difficult to think of another way to interpret such statements.


As far as I can tell, neuroscience doesn't have any particular problem modeling what you describe in the quote above.


Aug 4, 2012 -- 2:38PM, Mesothet wrote:

By what *standard* do you then differentiate “science” (as it here pertains to consciousness) from outright dogma as pertains to true, absolute, reality? Take time to think about this one, please.


I don't see this as being a problem that anyone needs to be concerned about. In all academic endeavors, ideas have to battle it out. Some of them become entrenched, but if the facts favor the opposition, the opposition will win over time.


What in the world does any of that have to do with this thread?


Aug 4, 2012 -- 2:38PM, Mesothet wrote:

To simplify, in which way is the “mind’s eye” an actually nonexistent reality that is actually only mythical (in this context, mythical meaning, again, “non-existent”)?


If your picture of the "mind's eye" is that of some entity separate from the brain to which brain processes present information or are manipulated by, THAT entity does not exist. Or rather, no evidence exists that it does. If that picture of what you are advocating is a strawman, then I apologize, but I can't think of another interpretation until you help me out to understand how your "mind's eye" is different than that.

Flag Mesothet August 4, 2012 4:13 PM EDT

No, I'm talking about instances where you use dualistic or homuncular language to describe the "mind's eye". For instance, where you described a self that "chooses" which coalition of "voting" neurons to go with, among other instances.




You perhaps don't intend to be interpreted that way, but it's difficult to think of another way to interpret such statements.



You are projecting your own physicalist assumptions upon something that . . . presuming you’ve actually read the posts that I’ve so far posted . . . has absolutely nothing to do with the argument I am here upholding. I have already stated—and illustrated, see again post #77, e.g.—that there is no “homunculus” to what I am arguing for. Neither do you seem to take any degree of respectful regard for my quite repeatedly stated position: one of philosophical idealism—and *not* that of substance dualism. Or are you utterly unknowledgeable as to this ontological stance? If so, please review it via Wikipedia before acting as though it does not exist—this if you’ve never bothered with perspectives such as those of “biocentrism” as was previously posted.


Let’s review before we go any further (it seems all this has gotten lost somewhere along the way).


-- What exactly is the difference between a perceiving-entity and a non-perceiving-entity to you?


-- Can there be such a thing as “perception” devoid of a “subjective perceiver”—regardless of how alien such may be relative to us?  


 ------------------


Also, please—as a personal courtesy—explain to me in which ways this might be false: valid-logic = speculative-opinion. [Such appraisal kind of reminds me of those who uphold Biblical literalism--for they too typically assert the same conclusion.]


For, you see, I have already presented a logical argument for the rudimentary aspect of consciousness which you have acknowledged in post # 94 to have no logical faults. But which you decree to not be “scientific” (???)—meaning what? Let me be clear on this: How is the logical argument discordant with ANY empirical data (data does not equal ontological interpretation of existence--unless you know something I don't, in which case, please reference it)? Also, as far as the terms (I presume that of “perception”) being “non-useful”, take the time to explain why this might be so (I am not accustomed to blindly following authoritative decrees, and without a rational argument for your assertions one might as well take them for such).


[I too do not much enjoy going around in circles forever.]

Flag Faustus5 August 4, 2012 5:11 PM EDT

Aug 4, 2012 -- 4:13PM, Mesothet wrote:

You are projecting your own physicalist assumptions upon something that . . . presuming you’ve actually read the posts that I’ve so far posted . . . has absolutely nothing to do with the argument I am here upholding.


Well, I can't even tell what argument you are trying to uphold!


Aug 4, 2012 -- 4:13PM, Mesothet wrote:

I have already stated—and illustrated, see again post #77 for e.g.--that there is no “homunculus” to what I am arguing for.


Sorry, don't know how I missed that.


Aug 4, 2012 -- 4:13PM, Mesothet wrote:

Neither do you seem to take any degree of respectful regard for my quite repeatedly stated position: one of philosophical idealism—and *not* that of substance dualism.


Sorry, I also missed the part where you repeatedly and clearly threw your lot in with idealism, too.


Aug 4, 2012 -- 4:13PM, Mesothet wrote:

Or are you utterly unknowledgeable as to this ontological stance?


I'm a philosophy major, of course I've heard about it.  It just is too ridiculous to take seriously, in my opinion.


I've never met someone who actually endorsed it--and I've been debating philosophy on internet discussion boards for something like a dozen years. So I probably didn't "see" your leanings toward idealism because I was straining to find another way to interpret your posts.


Aug 4, 2012 -- 4:13PM, Mesothet wrote:

-- What exactly is the difference between a perceiving-entity and a non-perceiving-entity to you?


I think it would be best to only say an entity perceives when it can react to information in its environment for goal-directed/potentially beneficial behavior.


Aug 4, 2012 -- 4:13PM, Mesothet wrote:

-- Can there be such a thing as “perception” devoid of a “subjective perceiver”—regardless of how alien such may be relative to us? 


In the sense above, yes - an entity that was utterly mindless and mechanical could use information in its environment to guide behavior.


Aug 4, 2012 -- 4:13PM, Mesothet wrote:

Also, please—as a personal courtesy—explain to me in which ways this might be false: valid-logic = speculative-opinion.


Um. . .what?


Aug 4, 2012 -- 4:13PM, Mesothet wrote:

For, you see, I have already presented a logical argument for the rudimentary aspect of consciousness which you have acknowledged in post # 94 to have no logical faults.


No one disputes that consciousness emerges bit by bit and that there are agents with rudimentary aspects of what humans have. There are no logical contradictions in the idea.


Aug 4, 2012 -- 4:13PM, Mesothet wrote:

But which you decree to not be “scientific” (???) meaning what?


There doesn't seem to be much science in what you are trying to say. It isn't that you write things in this thread that go against science; you just don't seem motivated to couch what you want to say in scientific terms. As an idealist, I suppose that makes some amount of sense. It just baffles me why someone wouldn't eagerly embrace vocabularies and tools that have been so successful at explaining consciousness.


But I know you are not alone in this -- I encounter this disdain for scientific approaches all the time.


Aug 4, 2012 -- 4:13PM, Mesothet wrote:

Also, as far as the terms (I presume that of “perception”) being “non-useful” take the time in explaining why this might be so. . .


I think I've already given versions of this: if you open up a concept so widely that almost everything counts as an instance of it (rocks perceive the hammer, amoebas are aware), then you diminish the meaning of the term to the point where it no longer performs useful work.


We want to employ a vocabulary that allows us to differentiate levels of mind or mindful behavior, going from the utterly mechanical and mindless (a virus) through various stages to entities we know for certain are conscious: humans. To do that there should be boundaries--they will necessarily be fuzzy, and that's a good thing--that make us squeamish about applying these vocabularies in certain instances.


Here's a hierarchy I propose: we start with perception (very low level--maybe in some sense a bacterium perceives), to awareness (should only be possible in a mind complex enough to model and categorize what it perceives), to the self-consciousness Stardustpilgrim has mentioned (very, very hard to find, if one uses the current popular litmus test of being able to recognize one's self in a mirror, which only a handful of animals can pass), and finally arriving at consciousness, which today I would argue we should only grant to language-using humans.


I think this is a far wiser and useful way to use terms that are applicable to aspects of mind than making everything somehow "mentalistic", which doesn't do much for anyone other than letting them sound spacey.

Flag stardustpilgrim August 4, 2012 6:57 PM EDT

Aug 4, 2012 -- 5:11PM, Faustus5 wrote:


[ . . . ]




I spotted a somewhat kindred soul in Mesothet from his earliest posts. To me there are essentially two views of reality and of consciousness. Either (Supreme Ordering) Consciousness precedes the material universe or consciousness is derivative of physicalism. If believing that consciousness comes first makes me an idealist, I'm an idealist.


Science means that I can objectively demonstrate to others what corresponds to reality.


But I also think it not imagination if I can subjectively verify that (self)consciousness can exist apart from considerations of time, space and materiality (the fact that this is what quantum entanglement demonstrates, doesn't seem to count as evidence). I don't put that out there often, as once nailed "spacey", people put you on their 'don't reply to that dude' list....or even, don't read that dude....


Saying that to say that if you want to have a conversation with Faustus, you have to have the data to back up what you say. I respect that.


But, just to keep the (my) record straight, the hierarchy is perception, awareness, consciousness and then self-consciousness.


But, Faustus, nice you included me......read and remembered....


(It actually takes some effort to get the distinction between consciousness and self-consciousness....but once you get it, it's a better state to be in........think of what you can do completely from habit, drive home from work on *auto-pilot*.....to seeing a wreck has closed your lane ahead...you have to slow down and move to the left lane (your state moving from more-or-less unconscious to conscious) to the shock of seeing that the person in the wreck is your wife.... lying on the ground and what that means to you...self-consciousness...examples of whole nations getting their consciousness altered, Pearl Harbor, the assassination of JFK [my sixth grade teacher came to the front of the class and said, boys and girls, our President has been shot.], Challenger exploding [I was working in a bathroom of an unfinished house, it was cold as hell, NC, heard it on the radio], 9-11 [I had just gotten into my truck, was driving down a blue-gravel driveway, a lady on the radio was saying...no!, no!, you don't understand!, a second! plane has hit the second! tower!]. A hint of what's-what, with moments of higher consciousness, the moment is seared into your memory. Moments of lower consciousness, virtually zero memory).


Needless to say, when I converse with Faustus, I try to do so from his territory.


sdp      

Flag Blü August 4, 2012 7:53 PM EDT

Mesothet


I've only read certain posts in this thread.  They're enough to show me that you still have the problems you had here.


Once again you present an emotional position.  You want it to be a factual and reasoned one when it isn't; and that's why you can't articulate it properly.  You use jargon and convoluted prolixity to obscure the lack.


So your first job is still to unmuddle your own thinking.  Formulate simple definitions of your terms; and clear short statements of your data; and transparent lines of reasoning from them (as I tried and failed to get you to do in that linked thread). Then you'll at last understand what you want to say and be able to convey it in simple terms to others.



Flag Mesothet August 5, 2012 10:53 AM EDT

:) Blu, Stop being so hypocritical about your own emotionally hewed attempts at character attacks and try to stick to the point. At the very least read posts #80 & #84 as background and then read post #86. Find logical fault with either post #80 or #86. Then present *logical* faults with post #80 or post #86, be it in terminology or in argument—this as a rational person would. May you not become juvenilely insulted on account of common sense reasoning as to how to go about an argument of premises and conclusions—even one that isn’t necessarily in tune with your personally upheld preconceptions.


As it turns out, F5 so far claims that post #86 has no logical faults. Double check though, F5 may well be wrong. And besides, I know too well that the argument presented was overly condensed.


----------------------------



I'm a philosophy major, of course I've heard about it.  It just is too ridiculous to take seriously, in my opinion




I've never met someone who actually endorsed it--and I've been debating philosophy on internet discussion boards for something like a dozen years. So I probably didn't "see" your leanings toward idealism because I was straining to find another way to interpret your posts.



F5, as to the difference between physicalism and philosophical idealism, the “it’s just too ridiculous/silly/absurd to take seriously” argument doesn’t actually say much in the way of reasoning/objective-reality. After all, YECs use this very same argument against such things as biological evolution or a gravitational singularity that holds zero volume and infinite energy. The “silly argument” isn’t quite conclusive, is it.


Issues of reasoning for a particular ontology probably belong on a different thread, however. Such as the one that Blu linked to.


Thank you for at last acknowledging (based on what I’ve both stated and illustrated) that a homunculus or dualism have nothing to do with my stance concerning consciousness.

Flag Faustus5 August 5, 2012 11:17 AM EDT

Aug 5, 2012 -- 10:53AM, Mesothet wrote:

F5, as to the difference between physicalism and philosophical idealism, the “it’s just too ridiculous/silly/absurd to take seriously” argument doesn’t actually say much in the way of reasoning/objective-reality.


It wasn't an argument, it was a statement of my attitude, and the attitude of most modern philosophers at that. Idealism is dead, and dead for a reason.

Flag Mesothet August 5, 2012 11:30 AM EDT

Aug 4, 2012 -- 5:11PM, Faustus5 wrote:


Aug 4, 2012 -- 4:13PM, Mesothet wrote:

-- What exactly is the difference between a perceiving-entity and a non-perceiving-entity to you?


 I think it would be best to only say an entity perceives when it can react to information in its environment for goal directed/potentially beneficial behavior.


Aug 4, 2012 -- 4:13PM, Mesothet wrote:

-- Can there be such a thing as “perception” devoid of a “subjective perceiver”—regardless of how alien such may be relative to us? 


 In the sense above, yes - an entity that was utterly mindless and mechanical could use information in its environment to guide behavior.


 



These two replies, to me, appear to be self-contradictory in a crucial way. Here’s why:


If “perceiving-entity” is defined as goal-oriented and driven toward potential-benefit to self (as we both currently agree it should be defined), then such entity cannot technically be “utterly mindless and [fully] mechanical”. This latter attribute will, after all, fully pertain to non-perceiving-entities. Perception, on the other hand, will technically be one sub-classification of thought. One need not have an occipital lobe transferring processed info into the temporal lobe, etc. in order to perceive. Nevertheless, devoid of some degree of *interpretation of information* (one aspect of active thought), no such thing as perception—as the term is now mutually agreed upon—can occur. And, where there is any degree of thought-process there will likewise be some degree of mind—this, of course, does not then mean a human mind.


For a perceiving-entity to be such it will need to hold some degree of subjective-reality. Again, as per post #86, such subjective-reality—regardless of how minuscule—will *at minimum* need to endow inbound information with the meaning of positive, negative, or neutral valence: otherwise—please correct me if I’m wrong—such entity will in no way hold any means of pursuing paths of potential benefit to itself.


There may be much disagreement in what I’ve just argued. Please indicate the premises/arguments which you find fault with.


 


 

Flag Faustus5 August 5, 2012 1:04 PM EDT

Aug 5, 2012 -- 11:30AM, Mesothet wrote:

These two replies, to me, appear to be self-contradictory in a crucial way. Here’s why:


If “perceiving-entity” is defined as goal-oriented and driven toward potential-benefit to self (as we both currently agree it should be defined), then such entity cannot technically be “utterly mindless and [fully] mechanical”.


Video game characters controlled by primitive artificial intelligence can perceive events in their virtual worlds and react intelligently with goal-directed behavior, sometimes with sophistication far beyond the amoeba or even any species of worm.  They are still utterly mindless and fully mechanical.  Or maybe I should say they are "pretty mindless", since we see in them the beginnings of minds.
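That sort of mindless but goal-directed behavior is easy to make concrete. Here is a toy sketch (the names, the grid, and every rule are my own assumptions for illustration, not code from any actual game) of a character that steers toward a goal and sidesteps a hazard using nothing but fixed rules:

```python
# Toy sketch of a "mindless yet goal-directed" game character: a few
# fixed rules produce behavior that looks purposeful. All names, the
# grid, and the rules are hypothetical, for illustration only.

def npc_step(pos, goal, hazard):
    """Advance one grid step toward the goal, sidestepping a hazard."""
    x, y = pos
    gx, gy = goal
    if x != gx:                                  # close the x gap first
        nxt = (x + (1 if gx > x else -1), y)
    elif y != gy:                                # then close the y gap
        nxt = (x, y + (1 if gy > y else -1))
    else:
        return pos                               # goal reached: stay put
    if nxt == hazard:                            # fixed avoidance rule
        return (x, y + 1)                        # sidestep upward
    return nxt

pos = (0, 0)
for _ in range(6):
    pos = npc_step(pos, goal=(3, 0), hazard=(2, 0))
print(pos)  # (3, 0) -- it routed around the hazard with no "mind" at all
```

Nothing here interprets anything; the apparent purposefulness falls out of a handful of conditionals.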


Aug 5, 2012 -- 11:30AM, Mesothet wrote:

Nevertheless, devoid of some degree of *interpretation of information* (one aspect of active thought), no such thing as perception—as the term is now mutually agreed upon—can occur


Such interpretations can be minimal to the point of irrelevance. Bees regularly clean their hive of dead sisters. It isn't because they understand and interpret their dead sisters to be dead, but because dead bees exude a chemical which other bees find unpleasant. They will treat any object in their hive coated with that chemical  the same way. The lesson here is that lots of behaviors that look intelligent are purely mechanical. Nature has lots of ways of getting things done with minimal intelligence.
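The bee example can be put in purely mechanical terms. A minimal sketch (the data structures and the chemical label are assumptions made for illustration, not a model of real bee biology): whatever carries the trigger chemical is removed, dead or not:

```python
# Illustrative sketch of the bees' corpse-removal trigger: a single
# chemical cue drives the behavior, with no interpretation of "death".

def undertaker_rule(hive_objects):
    """Apply the fixed rule: remove whatever carries the trigger chemical."""
    removed = [o for o in hive_objects if "oleic_acid" in o["chemicals"]]
    remaining = [o for o in hive_objects if "oleic_acid" not in o["chemicals"]]
    return remaining, removed

hive = [
    {"name": "dead_bee", "chemicals": {"oleic_acid"}},
    {"name": "live_bee", "chemicals": set()},
    # any object dabbed with the chemical is treated identically to a corpse
    {"name": "pebble_dabbed_with_acid", "chemicals": {"oleic_acid"}},
]
remaining, removed = undertaker_rule(hive)
print([o["name"] for o in removed])  # ['dead_bee', 'pebble_dabbed_with_acid']
```

The rule never asks whether anything is actually dead; it only tests for the chemical, which is the point of the example.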


Aug 5, 2012 -- 11:30AM, Mesothet wrote:

For a perceiving-entity to be such it will need to hold some degree of subjective-reality. Again, as per post #86, such subjective-reality—regardless of how minuscule—will *at minimum* need to endow inbound information with the meaning of positive, negative, or neutral valence: otherwise—please correct me if I’m wrong—such entity will in no way hold any means of pursuing paths of potential benefit to itself.


Nope, perception can be completely mechanical. I don't think it makes any sense at all to start talking about subjective reality until you get to animals with higher brain functions.

Flag Mesothet August 5, 2012 1:38 PM EDT

Aug 5, 2012 -- 1:04PM, Faustus5 wrote:


Aug 5, 2012 -- 11:30AM, Mesothet wrote:

These two replies, to me, appear to be self-contradictory in a crucial way. Here’s why:


If “perceiving-entity” is defined as goal-oriented and driven toward potential-benefit to self (as we both currently agree it should be defined), then such entity cannot technically be “utterly mindless and [fully] mechanical”.


 Video game characters controlled by primitive artificial intelligence can perceive events in their virtual worlds and react intelligently with goal-directed behavior, sometimes with sophistication far beyond the amoeba or even any species of worm.  They are still utterly mindless and fully mechanical.  Or maybe I should say they are "pretty mindless", since we see in them the beginnings of minds.



You here conflate perception with your notions of “intelligence”. What, then, to you is intelligence? Please provide a definition that would not then apply to an inanimate entity such as, say, a thermostat, a car, a computer, etc. . . . else you will once again enter a realm of absolute animism wherein any entropic entity itself will be deemed to hold “intelligence”.


Aug 5, 2012 -- 1:04PM, Faustus5 wrote:


Aug 5, 2012 -- 11:30AM, Mesothet wrote:

Nevertheless, devoid of some degree of *interpretation of information* (one aspect of active thought), no such thing as perception—as the term is now mutually agreed upon—can occur


 Such interpretations can be minimal to the point of irrelevance. Bees regularly clean their hive of dead sisters. It isn't because they understand and interpret their dead sisters to be dead, but because dead bees exude a chemical which other bees find unpleasant. They will treat any object in their hive coated with that chemical  the same way. The lesson here is that lots of behaviors that look intelligent are purely mechanical. Nature has lots of ways of getting things done with minimal intelligence.



Again, you are needlessly conflating *interpretation of information* with your own undefined notions of intelligence. (see above)


Once again, how can one logically appraise perception to be in any way possible devoid of any interpretation of information?


Aug 5, 2012 -- 1:04PM, Faustus5 wrote:


Aug 5, 2012 -- 11:30AM, Mesothet wrote:

For a perceiving-entity to be such it will need to hold some degree of subjective-reality. Again, as per post #86, such subjective-reality—regardless of how minuscule—will *at minimum* need to endow inbound information with the meaning of positive, negative, or neutral valence: otherwise—please correct me if I’m wrong—such entity will in no way hold any means of pursuing paths of potential benefit to itself.


 Nope, perception can be completely mechanical. I don't think it makes any sense at all to start talking about subjective reality until you get to animals with higher brain functions.



You are, I hope unintentionally, utterly ignoring the very logical argument presented:


As a philosophy major, please take the time to explain yourself: how can you logically validate your assertion that any entity can “pursue beneficial futures to its own self” devoid of subjective attribution of positive, negative, or neutral valence to information that it perceives (in the present as would be relevant to its own future self)?


Needlessly convoluting a logical argument with examples that you seem to think provide definitive evidence (ultimately, of countless unsubstantiated presumptions), will not in any way give merit to any logical argument you may provide.



Flag Faustus5 August 5, 2012 1:53 PM EDT

Aug 5, 2012 -- 1:38PM, Mesothet wrote:

You here conflate perception with your notions of “intelligence”.


Intelligence is like a Swiss army knife: lots of components. Perception is just one of those components. And it still stands that near-mindless AI is capable of perception and can use that perception for goal directed behavior.


Aug 5, 2012 -- 1:38PM, Mesothet wrote:

What, then, to you is intelligence? Please provide a definition that would not then apply to an inanimate entity such as, say, a thermostat, a car, a computer, etc. . . . else you will once again enter a realm of absolute animism wherein any entropic entity itself will be deemed to hold “intelligence”.


I'm afraid that it is perfectly valid to apply "intelligence" to computers and to even parts of computers. If you want a definition of intelligence, consult a dictionary.


Aug 5, 2012 -- 1:38PM, Mesothet wrote:

Again, you are needlessly conflating *interpretation of information* with your own undefined notions of intelligence. (see above)


Interpretation is another component of intelligence. And it still stands that acts of perception can occur with a bare minimum of what we would want to call "interpretation".


Aug 5, 2012 -- 1:38PM, Mesothet wrote:

As a philosophy major, please take the time to explain yourself: how can you logically validate your assertion that any entity can “pursue beneficial futures to its own self” devoid of subjective attribution of positive, negative, or neutral valence to information that it perceives (in the present as would be relevant to its own future self)?


Easily, since I don't allow that the concept of "subjective attribution" makes sense outside of the context of a complex nervous system or the equivalent.


Aug 5, 2012 -- 1:38PM, Mesothet wrote:

Needlessly convoluting a logical argument with examples that you seem to think provide definitive evidence (ultimately, of countless unsubstantiated presumptions), will not in any way give merit to any logical argument you may provide.


You are needlessly obsessed with "logic". An argument can be both logically valid in every possible way and yet also utterly without merit or usefulness.

Flag Mesothet August 5, 2012 2:29 PM EDT

Aug 5, 2012 -- 1:53PM, Faustus5 wrote:


Aug 5, 2012 -- 1:38PM, Mesothet wrote:

You here conflate perception with your notions of “intelligence”.


Intelligence is like a Swiss army knife: lots of components. Perception is just one of those components. And it still stands that near-mindless AI is capable of perception and can use that perception for goal directed behavior.


Aug 5, 2012 -- 1:38PM, Mesothet wrote:

What, then, to you is intelligence? Please provide a definition that would not then apply to an inanimate entity such as, say, a thermostat, a car, a computer, etc. . . . else you will once again enter a realm of absolute animism wherein any entropic entity itself will be deemed to hold “intelligence”.


I'm afraid that it is perfectly valid to apply "intelligence" to computers and to even parts of computers. If you want a definition of intelligence, consult a dictionary.


Aug 5, 2012 -- 1:38PM, Mesothet wrote:

Again, you are needlessly conflating *interpretation of information* with your own undefined notions of intelligence. (see above)


Interpretation is another component of intelligence. And it still stands that acts of perception can occur with a bare minimum of what we would want to call "interpretation".


Aug 5, 2012 -- 1:38PM, Mesothet wrote:

As a philosophy major, please take the time to explain yourself: how can you logically validate your assertion that any entity can “pursue beneficial futures to its own self” devoid of subjective attribution of positive, negative, or neutral valence to information that it perceives (in the present as would be relevant to its own future self)?


Easily, since I don't allow that the concept of "subjective attribution" makes sense outside of the context of a complex nervous system or the equivalent.


Aug 5, 2012 -- 1:38PM, Mesothet wrote:

Needlessly convoluting a logical argument with examples that you seem to think provide definitive evidence (ultimately, of countless unsubstantiated presumptions), will not in any way give merit to any logical argument you may provide.


You are needlessly obsessed with "logic". An argument can be both logically valid in every possible way and yet also utterly without merit or usefulness.




Wow . . . and so consciousness/perception/intelligence/etc. to you is basically whatever you FEEL comfortable with it being. Because you and the cohort you belong to so decree, based on anything other than logically sound arguments: self-contradictory statements (such as those you've given of intelligence) apparently being to you perfectly acceptable.


What can I say . . . You are however quite right . . . I do prefer to base my arguments about reality on logic. Sorry (?).

Flag Faustus5 August 5, 2012 2:53 PM EDT

Aug 5, 2012 -- 2:29PM, Mesothet wrote:

Wow . . . and so consciousness/perception/intelligence/etc. to you is basically whatever you FEEL comfortable with it being.


All I'm trying to do is bring you in line with how scholars in the field of consciousness studies use these terms. I'm not saying they would agree with all of my positions, as some of my positions are controversial (though we haven't discussed any of THEM yet). Perception is considered an aspect of intelligence, and intelligence is considered an aspect of consciousness. They have different meanings but all overlap at various points.


Aug 5, 2012 -- 2:29PM, Mesothet wrote:

Because you and the cohort you pertain to so decrees based on anything other than logically sound arguments: self-contradictory statements (such as those you've given of intelligence) apparently being to you perfectly acceptable.


I haven't made a single self-contradictory statement. I just don't use certain concepts the way you want me to. You'll just have to get used to it.

Flag newchurchguy August 6, 2012 9:38 AM EDT

Aug 5, 2012 -- 1:38PM, Mesothet wrote:


Aug 5, 2012 -- 1:04PM, Faustus5 wrote:


Aug 5, 2012 -- 11:30AM, Mesothet wrote:

These two replies, to me, appear to be self-contradictory in a crucial way. Here’s why:


If “perceiving-entity” is defined as goal-oriented and driven toward potential-benefit to self (as we both currently agree it should be defined), then such entity cannot technically be “utterly mindless and [fully] mechanical”.


 Video game characters controlled by primitive artificial intelligence can perceive events in their virtual worlds and react intelligently with goal-directed behavior, sometimes with sophistication far beyond the amoeba or even any species of worm.  They are still utterly mindless and fully mechanical.  Or maybe I should say they are "pretty mindless", since we see in them the beginnings of minds.



You here conflate perception with your notions of “intelligence”. What, then, to you is intelligence? Please provide a definition that would not then apply to an inanimate entity such as, say, a thermostat, a car, a computer, etc. . . . else you will once again enter a realm of absolute animism wherein any entropic entity itself will be deemed to hold “intelligence”.




Thermostats actually do behave physically.  Measured values for mechanical actions are involved.  They respond mechanically to environmental feedback in a determined way.  The embedded logic, supplied by the intelligence of a human designer, reacts with a fixed response to temperature changes.  I would say they are mindless, beyond the embedded logic.
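The fixed-action logic of a thermostat can be written out explicitly. A minimal sketch, with an arbitrary setpoint and hysteresis band chosen for illustration, showing a determined stimulus-response rule and nothing more:

```python
# Minimal sketch of a thermostat's embedded logic: one fixed
# stimulus-response rule with a hysteresis band. The setpoint and band
# values are arbitrary example numbers.

def thermostat_step(temp_c, heater_on, setpoint=20.0, band=1.0):
    """Return the heater state after one fixed-rule update."""
    if temp_c < setpoint - band:
        return True           # too cold: heater on
    if temp_c > setpoint + band:
        return False          # too warm: heater off
    return heater_on          # inside the band: leave state unchanged

print(thermostat_step(17.0, heater_on=False))  # True
print(thermostat_step(22.5, heater_on=True))   # False
print(thermostat_step(20.0, heater_on=True))   # True (no change in the band)
```

The designer's intelligence lives entirely in choosing the rule; the device itself only executes it.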


Virtual characters do not behave physically.  No matter changes structure and no energy is expended by their agency.  Like showbiz magic, we are distracted and tricked into belief in their agency.  They are simulations, just as math models and object-oriented programs are simulations, with real consequences limited to a virtual space.


F5 has attacked your view of science.  F5 is a bright young guy, but he has limited knowledge of the practices of material science.  I hope he does state his opinions and personal ideas on mind vs brain sometime.  I would be genuinely interested.  As for your statements, I think you hit the nail on the head that top-down causation is a crucial issue in the modern debate.  I have not seen any problem in your sincere efforts to define your views.  I value your posts as considered opinion.


Physical science is wonderful because of the ability to measure forces, mass, extension and time.  Causation in many cases can be modeled very clearly.  In material science and physics --> the parameters are defined by standardized units of measure.  With the values of these measurements and the laws of nature - bottom-up analysis is rather complete at the levels of reality below systems analysis.


Top-down causation through guidance or instruction requires information science units of measure.

Flag Faustus5 August 7, 2012 5:56 PM EDT

Aug 6, 2012 -- 9:38AM, newchurchguy wrote:

F5 has attacked your view of science.  F5 is a bright young guy, but he has limited knowledge of the practices of material science.


It is certainly limited, but it easily surpasses yours. Remember, I'm not the one with the reputation of making citations that routinely don't say what I claim  they do. That's your gig around here and everyone knows it.

Flag Mesothet August 7, 2012 9:59 PM EDT

My head’s still swirling from the derisive retort that, and I quote, “An argument can be both logically valid in every possible way and yet also utterly without merit or usefulness.”


Mind you, not “in some ways” but “in every possible way” . . .


Is this intended to be a logical conclusion as to why not to listen to reason? An illogical conclusion as to why not to listen to reason? You got me! . . . I’m speechless.


One thing though. Unless one is enamored with following the most authoritarian ego in town because they can pound their fist with hair on end more than, um . . . the other “evolved” animal of its species, please read up on what science actually is and signifies.


Here’s a nifty start: en.wikipedia.org/wiki/Science


Science is not about arrogance, dogma, irrationality, character attacks, well . . . there’s actually a long list of what science is not.


No one is perfect, but science itself, in a nutshell, actually endeavors to be about impartiality in the discovery of truth.


I’ll try to recuse myself from any further “snap-pow” postings on this thread. All the best.

Flag Blü August 8, 2012 9:23 AM EDT

Mesothet


Stop being so hypocritical about your own emotionally hewed attempts at character attacks and try to stick to the point.


No one ever tried harder than I did to get you to work out what you yourself were trying to say.  I knew I'd failed when you asked me what your argument was.


So when I see the like problems here, I feel entitled to draw them to your attention.  I confess to some exasperation but no malice.


Flag newchurchguy August 8, 2012 10:40 AM EDT

Aug 5, 2012 -- 1:04PM, Faustus5 wrote:


Aug 5, 2012 -- 11:30AM, Mesothet wrote:

These two replies, to me, appear to be self-contradictory in a crucial way. Here’s why:


If “perceiving-entity” is defined as goal-oriented and driven toward potential-benefit to self (as we both currently agree it should be defined), then such entity cannot technically be “utterly mindless and [fully] mechanical”.


Video game characters controlled by primitive artificial intelligence can perceive events in their virtual worlds and react intelligently with goal-directed behavior, sometimes with sophistication far beyond the amoeba or even any species of worm.  They are still utterly mindless and fully mechanical.  Or maybe I should say they are "pretty mindless", since we see in them the beginnings of minds.




F5,


Read your above argument that the zeros and ones of a video game character constitute goal-oriented behavior in the real world, rather than the logic of a programmer creating the illusion of such.


AI has developed to the point that pre-programmed goal-oriented logic can adapt to a world-class level of chess competition.  This is all natural and rule-bound by scientific laws.  However, there is no cross-over to the natural world, regarding the personal emotional motivation of digits.


Virtual characters do not behave physically.  No matter changes structure and no energy is expended by their agency.  Like showbiz magic, we are distracted and tricked into belief in their agency.  They are simulations, just as math models and object-oriented programs are simulations, with real consequences limited to a virtual space. -ncg



You may be thinking in terms of the "singularity" of R. Kurzweil - but I don't think even he would say that Sonic the Hedgehog is actually a perceptual agent in reality.

Flag newchurchguy August 8, 2012 11:03 AM EDT

from Wiki:


In July 2009, many prominent Singularitarians participated in a conference organized by the Association for the Advancement of Artificial Intelligence (AAAI) to discuss the potential impact of robots and computers and the impact of the hypothetical possibility that they could become self-sufficient and able to make their own decisions. They discussed the possibility and the extent to which computers and robots might be able to acquire any level of autonomy, and to what degree they could use such abilities to possibly pose any threat or hazard (i.e. cybernetic revolt). They noted that some machines have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons. They warned that some computer viruses can evade elimination and have achieved "cockroach intelligence." They asserted that self-awareness as depicted in science fiction is probably unlikely, but that there were other potential hazards and pitfalls.[7] Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions.[8] The President of the AAAI has commissioned a study to look at this issue.[9]


 Controversy


Often deriding the Singularity as "the Rapture of the Nerds",[10] some critics argue that Singularitarianism is one of many new religious movements promising salvation in a near-future technological utopia.[2] Science journalist John Horgan wrote:



Let's face it. The singularity is a religious rather than a scientific vision.

 


Flag Faustus5 August 9, 2012 7:08 AM EDT

Aug 7, 2012 -- 9:59PM, Mesothet wrote:

Is this intended to be a logical conclusion as to why not to listen to reason? An illogical conclusion as to why not to listen to reason? You got me! . . . I’m speechless.


An argument can be perfectly logical and yet be filled with false premises or irrelevant conclusions.


Aug 7, 2012 -- 9:59PM, Mesothet wrote:

Science is not about arrogance, dogma, irrationality, character attacks, well . . . there’s actually a long list of what science is not.


Thanks for that. I wouldn't have known had you not posted it.


Get back to us when you have something substantial and relevant to offer.

Flag Faustus5 August 9, 2012 7:11 AM EDT

Aug 8, 2012 -- 10:40AM, newchurchguy wrote:

F5,


Read your above argument that the zeros and ones of a video game character constitute goal-oriented behavior in the real world, rather than the logic of a programmer creating the illusion of such.


I don't recognize the "real" versus "illusion" distinction you are attempting to assert.


Aug 8, 2012 -- 10:40AM, newchurchguy wrote:

You may be thinking in terms of the "singularity" of R. Kurzweil - but I don't think even he would say that Sonic the Hedgehog is actually a perceptional agent in reality.


The singularity was the last thing on my mind, and yes, Kurzweil believes AI is progressing towards real machine consciousness.

Flag newchurchguy August 9, 2012 9:12 AM EDT

Aug 9, 2012 -- 7:11AM, Faustus5 wrote:


Aug 8, 2012 -- 10:40AM, newchurchguy wrote:

F5,


Read your above argument that the zeros and ones of a video game character constitute goal-oriented behavior in the real world, rather than the logic of a programmer creating the illusion of such.


I don't recognize the "real" versus "illusion" distinction you are attempting to assert.




A rational, science-based viewpoint becomes valid through data measured with universally recognized units of measure.  Things, events and processes that have no measurable values are not considered to be physically real.  Virtual events do not have measured values of mass, extension or force.


That's how you can tell real beings apart from virtual entities.  Reacting emotionally to virtual events brings a virtual effect into reality, but only in the sense that the mutual information transferred to the person triggers electro-chemical responses.  Belief in virtual agents as alive and meaningful, like living beings, is delusion and known as reification.


Note that reification is generally accepted in literature and other forms of discourse where reified abstractions are understood to be intended metaphorically,[1] but the use of reification in logical arguments is usually regarded as a fallacy. In rhetoric, it may be sometimes difficult to determine if reification was used correctly or incorrectly.



Output from information processing does have measured effects - such as design, logical relations and organization of resources.  These units of measure are specified in information science.  Reification on the virtual level is a term pointing to an information process with actual measurable effects. (look up reification in computer science)


The output of higher mind such as compassion, loving care and emotional joy - have yet to be measured systemically.

Flag Faustus5 August 10, 2012 6:49 AM EDT

Aug 9, 2012 -- 9:12AM, newchurchguy wrote:


Aug 9, 2012 -- 7:11AM, Faustus5 wrote:


Aug 8, 2012 -- 10:40AM, newchurchguy wrote:

F5,


Read your above argument that the zeros and ones of a video game character constitute goal-oriented behavior in the real world, rather than the logic of a programmer creating the illusion of such.


I don't recognize the "real" versus "illusion" distinction you are attempting to assert.




A rational, science-based viewpoint becomes valid through data measured with universally recognized units of measure.  Things, events and processes that have no measurable values are not considered to be physically real.  Virtual events do not have measured values of mass, extension or force.


That's how you can tell real beings apart from virtual entities.  Reacting emotionally to virtual events brings a virtual effect into reality, but only in the sense that the mutual information transferred to the person triggers electro-chemical responses.  Belief in virtual agents as alive and meaningful, like living beings, is delusion and known as reification.


Note that reification is generally accepted in literature and other forms of discourse where reified abstractions are understood to be intended metaphorically,[1] but the use of reification in logical arguments is usually regarded as a fallacy. In rhetoric, it may be sometimes difficult to determine if reification was used correctly or incorrectly.



Output from information processing does have measured effects - such as design, logical relations and organization of resources.  These units of measure are specified in information science.  Reification on the virtual level is a term pointing to an information process with actual measurable effects. (look up reification in computer science)


The output of higher mind such as compassion, loving care and emotional joy - have yet to be measured systemically.



This entire rant by you was utterly and completely irrelevant.


Intentional states are states of behavioral disposition that conform to cultural norms of behavior. You do not measure them. When an agent's behaviors conform to the norms for an intentional state, it is valid to attribute that intentional state to that agent. It doesn't matter what the agent is made of or whether it exists in a virtual world.
