Cem Bozsahin: a bloggy research statement

(a page for the light-hearted and forgiving; slightly more serious things, e.g. papers, tools and talks, are on the research page.)

topics: (linguistics | cogsci | computer science | mind thingy | videos)

Some questions to ponder about forms and meanings: If some identical surface structures have different meanings, how do we learn these meanings? If content is ambiguous (e.g. 'man' in 'the man' and 'man' in 'manned the aircraft with novices'; see Borer for a mind-blowing comic), what fixes their unique structural interpretation?

I am trying to see how far the 'neither syntax nor semantics suffices by itself' idea can address these questions.

Here is one personal and more programmatic statement of the problem, which I use to build theories and models to study these questions.

(my apologies for the upcoming verbosity; i wrote what follows to avoid confusing unsuspecting readers who have asked me about things here; my short answers appear to be more confusing.)

(colleagues: please ignore philosophy-of-sciencey chit-chat that follows; this is, after all, the web.)


In short:

I am a computer scientist. To me, it's all about human extended practice, not extended-human practice. I'm trying to understand the scientific, subcultural, technological and philosophical implications of that, assuming the following (and no, I am not a pancomputationalist or a born-again computationalist, since you asked; I wrote this and that about that):

  1. Cognitive science is the study of natural minds with computers.
  2. Computer science is about what can be automated to the extent of giving control/data to others without further need of instruction.
  3. Computer engineering is realizing the idea of the computer. (Yes, there is just one.)
  4. Computer science and linguistics have similar ways to explore, and seem to stand in contrast with psychology and philosophy. (This one's for a bit of soul searching in cogsci).

Somewhat surprisingly for some, all of these go back to one chap: Alan Turing. (He got us into trouble from day one by making an unfortunate analogy between humans and computers, but that was a long time ago, and trouble is his middle name.)

The first idea is Turing's thesis (not the Church-Turing thesis), which is essentially the beginning of cogsci. The second idea is what Knuth and Valiant said, and what Newell and Simon thought about the Turing Machine. The third idea started with Turing's ACE, which is essentially an electronic universal TM (let's face it; the other `first computers' were kludges). The fourth idea is the now-forgotten collaboration of CS and linguistics, and it's time to get back to it.

Better minds can tell you the same story in a different way. Dennett might say item 1 leverages the intentional stance to think that there are also artificial minds. Von Neumann would say item 3 is an attempt at the question `how can we automate it?'. Item 1 is what I wish Turing had said, but he said a bit more than that, and got us into trouble from day one. Speaking of trouble, check out item 4.

As a computer scientist, I moonlight as a linguist and occasional philosopher, and foray into computational linguistics and cogsci, always ending up in my own den.


Slightly longer bits:

The grammar bit

I am a grammarian. (Some say a pig-headed one; I take that as a compliment.) I look at the morphology-phonology-lexicon-syntax-semantics relation(s) as a cognitive scientist and try to understand their computational mechanisms. It's easier to call all that just 'grammar'. Grammars are not only for language; plans, audio systems and visual systems (and buildings!) have grammars too. The theory of grammar looks at these problems as the indirect association of forms and meanings, with explicit arguments about the resource boundedness of their computation.

Grammar is a simple (but not simplistic) tool to explore, and perhaps explain, some of the things we study. The garden variety comes with syntax only, but semantics doesn't grow on trees, so grammar becomes more exploratory if we think of it as a correspondence.

I study linguistic, computational, cognitive and philosophical aspects of grammars, more or less in this order of involvement. I also look at how perception can give rise to knowledge in a haphazard and bumpy sort of way. Nowadays this is called `dynamics'. I prefer to think of it as clumsy computing of the discrete variety. (Clumsiness comes from complexity in nature, not from bungling agents.) Computationalism of the discrete kind makes a narrower claim in a wider range of cognitive problems than cognitivism, connectionism or dynamical systems. I also happen to believe that it is more testable.

Once we eliminate the dichotomy from a grammar, word learning and language acquisition converge onto the same problem, whatever that is. It is an indirect way of attempting to understand the nature of the problem. We are empirically well-grounded in this affair, because human languages are provably non-context-free, and we can de-dichotomize the most restrictive superclass of context-free grammars we have found so far, the linear-indexed grammars. The curious thing is that, once we do that, we end up with limited kinds of semantic dependencies, although syntax seems to be so, ehm, infinite. Understanding the limited nature of this problem forces 'infinity' to play second fiddle in linguistics.

Kindly note that I am not promoting finitism, which is a school of mathematics that uses some ideas of constructive mathematics (a finite number of operations). I am suggesting that explaining why an infinite space has a finite number of properties is more exciting. So I assume from the beginning that there can be potential infinity. (Potential/actual infinity goes back to Aristotle, and comes back in linguistics to haunt us.) Saying that that's because we have a finite number of rules is not very exciting (not to me, at least), until we say something more about these rules. Once we understand that, we might say, "oh by the way, it's potentially infinite." Mathematicians prefer "arbitrarily large."

Let me exemplify. Take the four words of the set W = {I, you, think, like}. (Actually that's nine word forms, but let's not nitpick like a morphologist.) We can create an infinite language from W: I like you. I think I like you. You think I like you. I think you think I like you. You think I think you think I like you, etc. We can say that two rules are at work here, one for transitives like `like' and one for complement-takers like `think'. Right. Then why don't we get: I like I think I like you? Easy: `like' is not a complement-taker like `think'. So the difference between the two rules explains the odd case. Now consider why `think' is capable of doing this while `like' is not: it takes a complement that can take its own arguments. If you cannot do that, you're stuck with words like `like'. Now, that's *one* explanation for two rules.
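To make the two toy rules concrete, here is a minimal sketch in Python. It is my illustration only, not a fragment of any particular theory; the category labels NP, TV and SCOMP are assumptions invented for the example.

# A toy lexicon over W = {I, you, think, like}: 'like' wants a noun-phrase object,
# 'think' wants a sentential complement. Two rules, nothing else.

LEX = {
    "I": "NP", "you": "NP",
    "like": "TV",      # transitive: NP TV NP
    "think": "SCOMP",  # complement-taker: NP SCOMP S
}

def is_S(words):
    """True if the word sequence forms a sentence under the two toy rules."""
    if len(words) >= 3 and LEX.get(words[0]) == "NP":
        head = LEX.get(words[1])
        rest = words[2:]
        if head == "TV":
            return len(rest) == 1 and LEX.get(rest[0]) == "NP"   # rule 1: NP TV NP
        if head == "SCOMP":
            return is_S(rest)                                    # rule 2: NP SCOMP S
    return False

print(is_S("I like you".split()))                     # True
print(is_S("I think you think I like you".split()))   # True  (unboundedly deep)
print(is_S("I like I think I like you".split()))      # False (like is not a complement-taker)

The recogniser accepts the unboundedly deep `think' sentences and rejects `I like I think I like you', because only the complement-taker's rule re-enters the sentence rule.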

Now assume you want to take on board another process that is commonly unbounded: relativisation. If you go the `finite rules' way of explaining infinity, there is one more rule to explain, that of relativisation. If you go the finite-dependencies way, you can use the unique explanation above. That's one explanation for three rules. This is not a meaning-determines-form explanation; it is about codeterminism of semantics and constituency. And you can check the syntactic reflex of that in all languages. (What we have done is a mini practice of eschewing von Humboldtian (or generative) `infinity-in-finiteness' explanations, which presume infinity, to turn to Schonfinkelian `explaining infinity'.)

Notice that in the W-experiment we get infinity too: any argument-taking argument can do this, and no argument without arguments can.

One of my greatest teachers (Aryeh Faltz, RIP) once told me that anything finite is inherently boring. It is parochial. Why would anyone write a grammar to understand a finite number of dependencies? Just list them all and be done with it. I believed in that dictum for a long time, but now I'm having second thoughts. It all depends on how we deal with finiteness. In this business, numbers can be everything, contra popular belief.

A finite but very large language, say one with 10^200 sentences, needs a search algorithm to see which structures are realised, even though, after that search, the meaning seems to be just retrieval, not composition. Search is needed for an infinite language too, minus the semantic retrieval bit, which one must compose. If we know how to compose meanings, we can do that for finite systems too, and use it as an argument to show that their structures are made up of simple and finitely many primitives. The more interesting point is that infinite languages appear to have finitely many kinds of dependencies in them. So it seems that things can be finite and interesting. After all, the universe appears to be finite, but we wouldn't take all the atoms in the universe as its explanation. Enter boundedness.


The Language bit

The interesting bit in linguistic theorising is that human languages exhibit limited semantic dependencies in their syntax. We would like to know why. A strong hypothesis in this respect is that languages differ only in their grammars, and an invariant combinatorics gives semantics to order, and order alone, to lead to limited constituency and dependency. Common dependencies need not be stipulated in grammars, only the language-specific ones. From this perspective, infinity of languages (therefore recursion) is of secondary interest. I am beginning to think this is a stronger hypothesis than infinity (in the sense that it takes more of the burden of proof on its shoulders). I am in favour of testing strong hypotheses before we entertain the weaker ones. My high school best buddy told me to do that (well, more or less). I usually don't do what I'm told, but I make exceptions.

De-dichotomizing a grammar is crucial for this hypothesis. The underlying idea is the notion of "possible category," as a model of what Edmund Husserl called "sensibly distinct representations in the mind." Many of us believe categories need explanation, rather than stipulation. NB: these kinds of categories are knife edges: one side is syntactic, the other semantic. Any grammar must do justice to both, unless we start believing in one-edged knives. (The so-called one-edged knives, kard, culter, facon etc., are knives with only one edge sharpened, since you asked.)

I try to work towards a theory of grammar. Something weird happens to words in a grammar. By definition they are exceptional because they are all different, but they begin to bear combinatory categories that must be available all over the grammar as a recycled resource. This resource and its recycling need a theory. Naturally, something 'idiosyncratic' does not need a theory, so the theory we need must be about words' possible use in syntactic contexts, i.e. it must be about constituency. (Having said that, I do believe language is a kludge; if I wanted perfection in nature, I'd study sharks.) I guess what I'm saying is that a theory of kludge is a kludge too; it's turtles all the way down. Schonfinkel called them combinators. They are nifty kludges to give semantics to order.
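For a feel of what a knife-edge category and an order-sensitive combinator look like, here is a minimal sketch in Python. It is my illustration, not the author's grammar of English: the forward-only word order, the slash encoding and the primed semantic constants are all assumptions made for the example.

# A lexical category pairs a syntactic type with a semantic term; one rule sensitive
# only to order (forward application, X/Y  Y => X) combines both sides in the same step.

NP, S = "NP", "S"

def fwd(result, arg):                        # a rightward-looking type X/Y
    return {"res": result, "arg": arg}

LEX = {
    "I":    (NP, "i'"),
    "you":  (NP, "you'"),
    # knife edge: syntactic type (S/NP)/NP on one side, a lambda term on the other
    "like": (fwd(fwd(S, NP), NP), lambda obj: lambda subj: f"like'({subj},{obj})"),
}

def apply_fwd(fn_cat, arg_cat):
    """Forward application: (X/Y, f) combines with (Y, a) to give (X, f(a))."""
    syn_f, sem_f = fn_cat
    syn_a, sem_a = arg_cat
    if isinstance(syn_f, dict) and syn_f["arg"] == syn_a:
        return (syn_f["res"], sem_f(sem_a))
    raise ValueError("categories do not combine")

vp = apply_fwd(LEX["like"], LEX["you"])   # 'like you' : (S/NP, \subj.like'(subj,you'))
s  = apply_fwd(vp, LEX["I"])
print(s)                                  # ('S', "like'(i',you')")

The point of the sketch is only that a single order-sensitive rule does the syntactic and the semantic work at once; real categories carry much more than this.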

We can conceive of the grammar as something shaped by the invariant. The invariant can be studied semiotically (extensionally) and psychologically (intensionally). The same goes for grammar. But first we must account for Merrill Garrett's insight that "parsing is a reflex" (try turning it off if you're a skeptic). Part and parcel of the strong hypothesis is that this is due to having a combinatory system that gives semantics to order, and order alone.

More specifically, I am interested in how combinatory and substantive constraints shape surface syntax, and in the grammatical reflex of that effect. I am also interested in interactions among the components--functionally speaking--of a language system: morphology, syntax, semantics, prosody, information structure, what have you.

And, oh, before I forget: If your morphological beacon is Latin, we probably live in parallel universes.


The Cogsci bit

I believe grammar can be one of the most productive and creative tools in model construction for cognitive science, once it's taken off the dusty shelves of high school and out of the linguist's ivory tower. It relates directly to computation as we know it. But first, a bit of how we might have gotten to that point (all guesswork, of course).

The major difference between trees and animals is that animals move and trees don't. Everything that moves has a nervous system. (Not necessarily a central system, but a nervous system.) It seems that the whole need for a central nervous system arose because things that move must coordinate their movement and actions. (If you are in doubt, try tying your shoelaces as you run). Or it could just be a serendipitous accident to give us the mother of all neurons in a single thunderstrike (or astrocytes, if you like), in which case I will close shop and worship Taranis.

The point of cognitive science is to make sense of how coordinated activity can take place with what little perceptive ability a species has, and how task-specific knowledge can give rise to something more than the token experience. That's what David Hume suggested---well, not in these words, and I'm old-fashioned enough in this matter to leave the good words to their owners.

When things move, they must track other objects and coordinate their actions. (An inquiring mind might estimate the potential lifetime of a mouse that seems to totally ignore a curious or hungry cat.) A simple hypothesis, a.k.a. the computationalist hypothesis, is that all kinds of coordinated action are more of the same stuff. What distinguishes the species is their resource endowment and life training (i.e. exposure to data). So maybe, just maybe, the most uniquely human cognitive trait, language, is more of the same stuff, with more resources and less training, rather than a gift to mankind or some kind of miracle. (Read: the only miracle I believe in is the national lottery.) A bit of evolutionary patience might give us wonders, if you pardon the pun.

Here's a consensus list for the top hundred works in cogsci: Top 100

Just to exploit the benefits of a non-representative democracy, a.k.a. the web, I publicise my own top ten+ list, for whatever it's worth^a: my cogsci top ten


The computation bit

There seems to be a lot of confusion about what computation can do and must do in cognition.

Myself being one of the confused, I try to convince students (and usually fail) that just because you use a computer to model does not mean you are a computationalist. Just because you don't use a computer does not mean you are not a computationalist. (I tend to think Panini was a computationalist, and ACT-R is not. ACT-R is software engineering for cogsci, much like HPSG is for linguistics.) Just because you think symbols are natural representations for the mind does not mean that we've got a Turing machine running in our heads. Some psychologists might think that's computationalism, but it isn't. (Trust me, not all of them think like that.)

Computationalism is a style of thinking which suggests that computational principles (discreteness, complexity, resource boundedness) carve the hypothesis space of higher-level cognitive processes. Easier solutions appear earlier than more difficult ones. But, easy solutions may not be enough, because we face multiple constraints in a complex life. In other words, the problem space could be general, perhaps divided into classes of problems according to their demands, but the solution is task-specific. Computational ease, difficulty and comprehensiveness are measured by complexity in time and space, automata-theoretic demands, frequency (for biased search), completeness, decidability. We've got theories about these things, which go under the names Complexity Theory, Algorithms, Automata Theory and Logic.

Two examples: 1) Suppose we have a string of n words. Suppose also that the problem is figuring out which parts of the string mean what. In Quine's sense, there are infinitely many possibilities. In Siskind's sense, the possibilities are reduced to likelihoods by parsimony, e.g. exclusivity of potential meanings, cross-situational inference, etc. We must also allow for the possibility that a sequence of words in the string could mean one thing, as in an idiom. Without constraints, there are O(2^n) possibilities to look at. (It is the powerset of n elements.) With Zettlemoyer and Collins's constraint, that only contiguous substrings can have a pindownable meaning, the possibilities reduce to O(n^2). Any n larger than 4 can tell you why we must do something like this. Then we begin to worry about what this contiguity assumption brings to cognition, and whether it is attested elsewhere in the cognitive world. A cognitivist theory might start with assumptions like: nouns are learned first because they stand for objects and there are lots of objects around. Computationalists would say short, frequent, unambiguous, perhaps long but repetitive words are learned first, because we know that these properties make the problem computationally easier. You decide.
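Here is a small check of the counting claim in Python. It only illustrates the growth rates; it is not Zettlemoyer and Collins' actual learner, and the sample values of n are my own choice.

# Arbitrary subsets of n word positions as candidate meaning-bearing units grow as 2^n;
# contiguous spans grow only as n(n+1)/2, i.e. O(n^2).

def all_subsets(n):
    return 2 ** n - 1          # non-empty subsets of n positions

def contiguous_spans(n):
    return n * (n + 1) // 2    # non-empty contiguous substrings of a length-n string

for n in (4, 10, 20):
    print(n, all_subsets(n), contiguous_spans(n))
# 4   15       10
# 10  1023     55
# 20  1048575  210

By n = 20 the unconstrained space is already roughly five thousand times larger than the contiguous one, which is the point of the 'any n larger than 4' remark.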

2) Think of word learning again, this time as a language game as in Luc Steels's Talking Heads. (OK, real ones are more talkative, and much better at music.) A group of agents try to learn to communicate, which we measure by their success in building a common vocabulary in one-on-one speaker-hearer role play. Some cognitivist assumptions could be "avoid homonymy, avoid synonymy". To a computationalist, that puts the cart before the horse. We can see through simulations that homonymy and synonymy will cause unstable systems, or late convergence to a common vocabulary at best. They are effects, not causes. If people want to communicate (now that's a cause), and figure out that agreeing on the meanings of labels is a simple way to do that, we get limited homonymy and synonymy, and convergence. Do you know of a language in which homonymy and synonymy are completely absent? Why don't we avoid them completely while we're at it? Maybe that's not what we're doing, but that's what we are getting. (We conducted this experiment, which we report here.)
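For the flavour of such a simulation, here is a bare-bones naming game in Python, in the spirit of the Talking Heads setup. It is my simplification, not the experiment reported in the paper; the population size, number of rounds, objects and alignment rule are all assumptions.

# Agents only try to agree on labels; no 'avoid synonymy' principle is built in anywhere.

import random

OBJECTS = ["obj%d" % i for i in range(5)]
N_AGENTS, ROUNDS = 10, 5000
vocab = [dict() for _ in range(N_AGENTS)]   # per agent: object -> set of known labels

def new_label():
    return "w%04d" % random.randrange(10000)

for _ in range(ROUNDS):
    s, h = random.sample(range(N_AGENTS), 2)        # speaker and hearer
    obj = random.choice(OBJECTS)
    labels = vocab[s].setdefault(obj, set())
    if not labels:
        labels.add(new_label())                     # speaker invents a label if it has none
    word = random.choice(sorted(labels))
    if word in vocab[h].get(obj, set()):
        vocab[s][obj] = {word}                      # success: both align on the winning label
        vocab[h][obj] = {word}
    else:
        vocab[h].setdefault(obj, set()).add(word)   # failure: hearer adopts the label

# synonymy left in the population: distinct surviving labels per object
for obj in OBJECTS:
    print(obj, len(set().union(*(v.get(obj, set()) for v in vocab))))

After enough rounds the count per object typically drops towards one: synonymy dies out as a side effect of agents merely trying to succeed, with nothing in the code telling them to avoid it.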

I guess what i seem to be saying is that functionalism is a ghost that I am trying to exorcise.

In sum, if I were a rich man, I would take AMT as my main inspiration, Allen Newell as my friendly psychologist (yes, i chose a computer guy as my psychic guide, so what?), Chomsky as the scientist, with care, minus the romanticism, much less the baroque nativism (as opposed to the more sensible variety), for what seems like an inner drive for explanations, but also for public relations (he insists on computation, although in a weird way, but is good for business), Montague and Curry as design engineers (NB: rigorous models), Hume as the boss (now that's a big wig, because it has to cover a large territory), Husserl as the heavyweight philosopher (he's got a big beard to prove it too, unlike the islander variety, or the wrong side of the Atlantic variety--a great one: Fodor, and it's great in this business if half of what you say makes sense), Wittgenstein as the master spoiler for that dormant cognitivist lurking in all of us, and for fun (my cats talk to me; i would like to return the favour), Darwin as my gentle naysayer without academic quibbles (the man almost gave up his life's work upon a single letter from the Malay Archipelago), Lamarck as my constant reminder (he was wrong, as we will all be one day, perhaps even now, but he was a bloody good scientist), and Haj Ross as the linguist (name one syntactic nut to crack that you cannot trace back to JRR or NC). I am guessing a person of these likenesses might materialise in the 23rd century.


To the (potential) students, friends, fiend or foe who ask me whether i am an internalist, externalist, realist, mentalist, instrumentalist, eliminativist, empiricist, rationalist, physicalist, materialist, idealist, functionalist and what not, i can say this much (sorry for the presumption; i've been asked this question too many times, and i don't think my answers have been cohesive or consistent): it seems that finding an interesting AND workable subset of these principles and ideas to everyone's satisfaction is an admirable task (assuming that finding an interesting OR workable subset is boring). But life's short. I wouldn't mind if the label comes later. I thought I was safe with the labels `realist', `naturalist' and `computationalist', but life's full of surprises. So I try to follow the pillow-book approach for some scientific fun. If a book from the 10th century can tell us so much about life just by listing people, birds, dogs, trees, rain, snow etc., I'd like some of that.


Some surprising facts about the mind

  1. To door or not to door
  2. Bikes: can, can't
  3. Scaffolding the bike recovery
  4. Born to be Welsh
  5. From echolalia to grammar
  6. The crow mind
  7. The chimp mind
  8. Mind: your own business

I blame Beckett for legitimising schoolboy humour in public places (and these guys). Actually I blame Sam for everything, because he's dead.

a. OK I confess. I was asked by a student.^


Cognitive Science Department
Informatics Institute
Middle East Technical University
06800 Ankara, Turkey