All of intelligence – anything observable that intelligent creatures *do*, anything by which you can *tell* something is intelligent – is a *decision*. That decision could be conscious or subconscious, but it is a decision nonetheless: you could have done or said something else, but you said “no” to every other option and chose whatever it is you did. I’ve always been interested in this process of decision-making.
Computer science is my methodology: to solve problems, I write computer programs. This is how I naturally approach research, this is what I’m good at, and this is what I do.
So I think of *intelligence* as *decision making*; it follows that I want *computers* to make *decisions*. In fact, AI is all about computers making decisions, and I think that is a *very* deep statement – it is what makes AI both powerful and scary. Talking to a computer does not seem so scary; computers making the world’s decisions does. But a talking computer is one that *makes decisions* about what to say. I can also see why a statistical/probabilistic framework, which concedes up front that some fraction of its decisions will be wrong, would have been disconcerting for AI pioneers trying to get computers to make the *right* decisions.
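Just to make that concrete for myself, here is a toy sketch (in Python; the vocabulary and probabilities are entirely made up for illustration) of a “talking computer” that, at every word, says “no” to every alternative and commits to one:

```python
import random

# A toy "talking computer": every word it says is a decision.
# The vocabulary and probabilities below are invented for illustration.
next_word_probs = {
    "the": {"team": 0.6, "game": 0.4},
    "team": {"won": 0.7, "lost": 0.3},
    "game": {"ended": 1.0},
}

def choose_next(word):
    """A weighted decision: say "no" to every option but one."""
    options = next_word_probs.get(word)
    if not options:
        return None  # no options left: stop talking
    words = list(options)
    weights = [options[w] for w in words]
    return random.choices(words, weights=weights)[0]

sentence = ["the"]
while (word := choose_next(sentence[-1])) is not None:
    sentence.append(word)
print(" ".join(sentence))  # e.g. "the team won"
```

It is a silly generator, but the structure is the point: talking is a sequence of decisions.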
Indeed, natural language generation is part of the very *essence* of AI – it forms the basis of the Turing test! So in some sense it really is “AI-complete”. That’s exciting, but it also means I should be careful: pragmatically, I need to find research that is tractable. Perhaps a computer that makes decisions for generating natural language about something specific – like Regina’s football database. Or perhaps a computer that makes decisions for generating a specific type of natural language about something general. What might I mean by a “specific type of natural language”? I’ll have to think about that. Or go ask the linguists.
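To gesture at the first option – and this is only a hypothetical sketch, not how Regina’s system actually works; the record fields and templates are invented – generating a sentence about a single game record can be framed as a couple of decisions: orient the facts, then choose a phrasing:

```python
import random

# Hypothetical sketch of generation over one "football database" record;
# the fields and templates are invented, not taken from any real system.
game = {"home": "Patriots", "away": "Jets", "home_score": 27, "away_score": 13}

templates = [
    "{winner} beat {loser} {ws}-{ls}.",
    "{winner} defeated {loser} by {margin} points.",
]

def describe(g):
    """Two decisions: orient the facts, then pick a phrasing."""
    home_won = g["home_score"] > g["away_score"]
    winner, loser = (g["home"], g["away"]) if home_won else (g["away"], g["home"])
    ws = max(g["home_score"], g["away_score"])
    ls = min(g["home_score"], g["away_score"])
    template = random.choice(templates)  # the phrasing decision
    return template.format(winner=winner, loser=loser, ws=ws, ls=ls, margin=ws - ls)

print(describe(game))  # e.g. "Patriots beat Jets 27-13."
```

Even this toy has to make decisions – which team to cast as the winner, which template to use – and presumably the interesting research is in making those decisions well rather than at random.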