Sunday, August 07, 2005

Semantic Meaning

I've been thinking a lot about semantic meaning recently, especially in regard to Gödel's Incompleteness Theorem. Anyway, I remain convinced that the meaning of a proposition is found in its method of verification or falsification. I believe that the meaning of a proposition to an individual is itself a theory about the methods of verification for that proposition. I say it is a theory because we generally cannot know with certainty the meaning of a given proposition (Quine's indeterminacy of translation). This is a subtle point, because uncertainty in meaning can generally be rendered as small as necessary for any given task. It only becomes a major limitation when your theory of meaning cannot be verified at all, not even by yourself.

For example, if you tell me "the quijibo of 5 is 0.2", I will form a theory about the quijibo operator. It might mean taking the reciprocal, or it might mean multiplying by 0.04, or any of an infinite number of other possible operations. I will usually assume the simplest one I can think of (humans are simpletons for the most part). I can then evaluate the additional propositions you give me as tests of my quijibo theory.
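To make this concrete, here is a minimal sketch in Python. Both candidate theories and the follow-up proposition ("the quijibo of 4 is 0.25") are my own invented illustrations, not anything you actually said:

```python
# Two candidate theories of the unknown "quijibo" operator, both consistent
# with the single observed proposition "the quijibo of 5 is 0.2".
def theory_reciprocal(x):
    return 1.0 / x      # quijibo(x) = 1/x

def theory_scaling(x):
    return 0.04 * x     # quijibo(x) = 0.04 * x

def consistent(theory, claims, tol=1e-9):
    """A theory of meaning survives only if it reproduces every claim made so far."""
    return all(abs(theory(x) - y) < tol for x, y in claims)

claims = [(5, 0.2)]
print(consistent(theory_reciprocal, claims))  # True
print(consistent(theory_scaling, claims))     # True

# A hypothetical further proposition, "the quijibo of 4 is 0.25", falsifies
# the scaling theory and leaves the reciprocal theory standing, for now.
claims.append((4, 0.25))
print(consistent(theory_reciprocal, claims))  # True
print(consistent(theory_scaling, claims))     # False
```

Of course, the surviving theory is never certain; a later proposition could always falsify the reciprocal reading too, which is just the indeterminacy point again.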

So, if you give me a mathematical proposition, e.g., "the sum of the squares of 3 and 4 is equal to the square of 5", your proposition is meaningful because I have confidence that I know how to duplicate your computational experiment. I can square the numbers myself and come to the same conclusion (if I get my sums right). Of course, there are many prerequisites, such as our shared mathematical axioms, but these can still be communicated precisely enough for the proposition to make sense.
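For what it's worth, the "computational experiment" here amounts to a one-liner:

```python
# Duplicating the claimed computation: the sum of the squares of 3 and 4
# equals the square of 5.
assert 3**2 + 4**2 == 5**2   # 9 + 16 == 25
```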

Likewise, propositions about the real world have meaning when a suitable physical experiment can be devised to test them. The proposition need not be about an object that actually exists; it makes sense as long as I can devise an experiment that would detect the object if it did exist.

According to this model of semantic meaning, an AI can only understand a proposition if it can create a theory about what the proposition means, and knows how to perform the computational or physical tests needed to verify it.

6 comments:

Robin Zebrowski said...

The theories about verification would also be propositional, no? In which case, I (naturally) believe you've hit on a key point with "physical" tests :)

(I've always spelled kwyjibo with a k! But I'm a dork.)

Doctor Logic said...

As a matter of fact, this post was originally an email to you, Robin!

Unfortunately, I couldn't quite get from here to your embodiment hypothesis. I realized it might even be used to argue in the opposite direction. Any system that can formulate theories and test them against a reference is capable of understanding. The question is: what constitutes a reference? Is a simulated world as good a reference as a physical one?

(I only remember the phonetic spelling, but I figured Q was worth 10 points. Maybe it's qwyjibo.)

Robin Zebrowski said...

I guess my question would then be "what does it mean to formulate a theory?"

The reason I ask is the ever-abused thermostat example. I don't think we want to say the thermostat understands anything, but the language can get ugly, and really committed strong-AI folks could say the thermostat has beliefs and goals about room temperature, which it can check and adjust. I know you aren't saying thermostats have understanding, but the example traditionally serves to show how icky things get when we start attributing 'understanding' to things.

I have no answer for this, but I think it's a bit more difficult than merely having a proposition that can be tested against a reference.

Anonymous said...

How hubristic to insist that 'things' cannot have 'understanding.' Perhaps your proverbial thermostat does not have a hominal version of understanding, but by the same token, you do not have the thermostat's understanding of the universe either.

Christopher Trottier said...

Yes, but how do hammocks fit into all of this?

Doctor Logic said...

Robin,

I agree with you that machines that follow a small set of domain-specific rules are not considered intelligent (except for marketing purposes!).

We generally regard a system as intelligent when it can acquire new theories, not just new facts.

A house robot that has a sense of touch can construct a map of the room that will enable it to navigate without collisions. While this is a common academic AI project, we wouldn't really consider the robot intelligent unless, say, it could formulate theories about moving objects. Otherwise, the robot would be doing nothing more than deducing facts within a predetermined model.

So a thermostat is not intelligent, because it cannot find patterns in its measurements, build theories based on those patterns, or test those theories against experience.

I forgot to mention one other thing. I started from a prerequisite for understanding language propositions: I assumed that language (even if it's just an internal one) is required for intelligence. It is certainly necessary if you want to pass the Turing Test, but it may not hold for intelligence in general. However, I suspect that any intelligent physical system (Turing testable or otherwise) can be shown to be equivalent to some sort of language processor, even if the language is physics.