Thursday, August 11, 2005

Thinking

My lack of formal education in computer science is showing. I've just read an article on Turing machines. I've read about them before, but this time I was actually trying to absorb something.

You see, I would like to formalize my logical positivist/empiricist take on semantic meaning. If human computation is equivalent to a Turing machine, I might be able to prove my statements formally.

The first step would be to establish a formal definition of understanding. From there, one could probably show that the only way for an intelligence to know that it understood something would be by verification. Now that I think about it, that may even be the formal definition of understanding, i.e., the ability to test whether one has a correct theory. Beware circularity, of course.

As Robin pointed out in her comments on my last post, we don't regard thermostats as having "understanding". My answer to her comment was that true intelligence required an ability to formulate new theories and test them. My new question is, was my standard for intelligent systems too strict? If understanding is the ability to test whether some knowledge is correct, does that in itself require the ability to formulate theories?

Take the thermostat example. The thermostat applies a rule: if temperature T > Tmax, then apply cooling; if temperature T < Tmin, do nothing. In order to perform this function, it must know the temperature, T. Does the thermostat "understand" T? Does it know that it knows T? I would say that it does not. Without a theory about how T correlates with any other variable, its only "theory" is a bare assertion like "T = 68 degrees F". Since the thermostat has a single sensor that returns a temperature value, there is nothing to check that reading against. The thermostat has no way to tell whether the sensor is functioning correctly or not.
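For concreteness, here's that rule as a few lines of Python. This is just a sketch; the function name and the setpoint values are made up for illustration:

    def thermostat_rule(T, Tmax=72.0, Tmin=70.0):
        """The thermostat's entire "knowledge": one reading, one rule.
        Nothing here lets it check the number T against anything else;
        the sensor's value is taken entirely on faith."""
        if T > Tmax:
            return "apply cooling"
        if T < Tmin:
            return "do nothing"
        return "do nothing"  # the rule as stated is silent between the setpoints

    print(thermostat_rule(75.0))  # apply cooling
    print(thermostat_rule(68.0))  # do nothing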

Can we build a thermostat that understands temperature? What is the simplest testable theory about temperature we can make?

How about a thermostat with two different kinds of sensors? A theory of current temperature might then claim that if both sensors are read at times t1 and t2, and the two sensors are measuring the same thing, then

Tsensor1(t1) > Tsensor1(t2) => Tsensor2(t1) > Tsensor2(t2).

If the relation does not hold, then they can't both be measuring temperature.
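That test is just a material implication, so it's easy to sketch in Python (all names and readings below are invented for illustration):

    def theory_holds(s1_t1, s1_t2, s2_t1, s2_t2):
        """Test the theory that sensors 1 and 2 measure the same quantity.
        The implication "sensor 1 read higher at t1 => so did sensor 2"
        is encoded as (not A) or B. If it fails, at least one sensor
        is not reading the room temperature."""
        return (not (s1_t1 > s1_t2)) or (s2_t1 > s2_t2)

    # Both sensors agree the room was warmer at t1: the theory survives.
    print(theory_holds(71.0, 68.0, 70.5, 67.8))  # True
    # Sensor 1 says warmer at t1, sensor 2 says cooler: theory refuted.
    print(theory_holds(71.0, 68.0, 66.0, 70.0))  # False

Note that a passing check doesn't prove the sensors are honest; it only fails to refute the theory.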

Can we claim that such a thermostat understands what the current temperature is? It would certainly not understand the physical concept of temperature, but it would understand what the current value of T is, because it has a means of testing whether it knows the current value of T (or at least whether the current value of T is plausibly correct).

Maybe "understanding" is too strong a word. There's at least some difference between executing an algorithm that compares the temperatures from two sensors and constructing new algorithms to do the same.

Still, I think I'm on to something here.

1 comment:

rob said...

Thermostats serve a purpose, so right off the bat they are different from humans, eh?
I can think of a few offhand.
Keep this computer room at a steady temperature between 70 and 72 degrees.
-or-
Keep me feeling comfortable.

The latter instance may require adjusting the settings a few times as I adjust from outdoor temps to indoor temps. In fact, if I take out the thermostat and replace it with an on/off switch and flip it up and down as I need to, it would be even more effective than a thermostat...until I fall asleep, that is.

Now, I love tinkering around with my computers, but there is no substitute for human computation when dealing with humans, in my view.