Indiana University Bloomington

FARG Research Overview

The Fluid Analogies Research Group was launched roughly 30 years ago with the Seek-Whence, Jumbo, and Copycat projects, the goals of which were (and still are) to make accurate computational models of the most fundamental mechanisms of human thought. Since that long-gone day, a fair number of other projects have been added to the list. What all the FARG projects have in common is their incessant focus on two profound and totally inseparable issues: (1) What is a concept? and (2) How does analogical thinking take place?

The philosophy behind our vision of thinking is that the crucial task of brains is, figuratively speaking, to “put their fingers on the essence of the situations facing them”, and that this is done by what we call “high-level perception”. That means that we FARGonauts (as we, with deliberate humor, call ourselves) see a smooth continuum running from, at the lower end, the recognition of the color red, to the recognition of a red apple on a brown table, to the recognition of breakfast on a countertop, to the recognition of a mess in the back seat of our car, to the recognition of a mess in a friend’s romantic relationship, to the recognition of the vast web of implications of a friend’s potential divorce, to the recognition of the profound irony of the fact that the surgeon is now dying of the very disease that she herself cured so many times in her life, and so forth and so on. Needless to say, the latter examples get closer and closer to the “high end” of the spectrum of high-level perception.

Our belief is that this kind of “seeing” (which obviously transcends any traditional “sensory modality”, such as vision or hearing or touch) is the core of human thought, and that it is carried out by the mechanisms of analogy-making. We very deliberately avoid saying “analogical reasoning”, because to us that extremely standard, traditional expression is loaded with all sorts of unwanted and misleading connotations. To us, the making of analogies means nothing more and nothing less than recognizing in something “before us” (not necessarily before our eyes, however) what it most centrally “is”. This means making a link between two mental structures, one being the imperfect, crude representation that we have (so far) built up of this situation, and the other being a pre-stored mental representation of another situation from our past (or, just as often, if not much more often, a pre-stored mental representation of a bunch of situations from our past — which is to say, a known concept). We FARGonauts do not draw any distinction between a memory of one event or situation, and a memory of a number of similar situations (i.e., a concept) — in fact, we see a memory of one event or situation as constituting every bit as much a genuine concept as is the blurry superposition of a hundred similar situations (which seems more like the traditional notion of what a concept is, although sometimes such a superposition is called a “schema”, a term that to our eyes muddies up the waters considerably).

In short, the FARG philosophy is that analogy-making is the core of cognition. Analogy-making brings all of our concepts into existence, it continually broadens them and deepens them and sharpens them over our lifetimes, and thanks to analogy-making, we recognize new situations as being “instances”, in a sense, of old situations. Thus we FARGonauts see analogy-making as happening in a brain not once a week or once a day or once an hour or once a minute, but as happening many times per second. Analogy is thus, for us, truly the core mechanism of cognition.

We study this remarkably subtle and ubiquitous mechanism of cognition by looking at how it works in tiny microworlds that we design specifically to bring out its deepest, subtlest, most elusive aspects. These microworlds range from the Copycat alphabetic microdomain (“If abc changes to abd, then what does xyz change to?”) to the Seek-Whence numerical microdomain (“What is the most likely pattern lurking behind the scenes of the infinite sequence of integers that starts out ‘2, 1, 2, 2, 2, 2, 2, 3, 2, …’?”) to the Letter Spirit artistic-style microdomain (this involves trying to understand the abstract “spirit” of “gridfonts” — the spirit that pervades the 26 lowercase letters of the alphabet, restricted to a relatively small grid of vertical, horizontal, and diagonal straight-line strokes, all designed by a human being with the goal of creating a uniform artistic style) to the Phaeaco Bongard-problems microdomain (the Russian computer scientist M. M. Bongard, in the late 1960s, designed a set of 100 visual pattern-recognition puzzles that are remarkably subtle, delightful, and playful, and Phaeaco carries out vision at many levels of abstraction in order to solve such puzzles, sometimes doing well and sometimes flopping hilariously) to the geometric-creativity microdomain of George (a program that tries to discover new concepts and to make new hypotheses in the domain of triangle geometry in the Euclidean plane). There have been several other FARG projects, but this list gets across the flavor.
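A concrete taste of the first of these microdomains may be helpful. Below is a deliberately naive little sketch (in Python; it illustrates the puzzle itself, not the Copycat program, and the function names are ours) of why “If abc changes to abd, then what does xyz change to?” is so interesting: the shallow rule “replace the last letter by its alphabetic successor” works perfectly on abc but hits an impasse at z, which has no successor, and it is precisely such an impasse that forces a deeper re-perception of the problem. (Copycat’s most celebrated answer, wyz, treats xyz as a mirror image of abc and takes the predecessor of the first letter instead.)

    # A deliberately rule-literal toy for "abc -> abd; xyz -> ?".
    # This is an illustration of the puzzle, not the Copycat architecture.

    def successor(letter):
        """Return the next letter of the alphabet; 'z' has none."""
        if letter == "z":
            raise ValueError("'z' has no successor: the rigid rule breaks down")
        return chr(ord(letter) + 1)

    def rigid_rule(string):
        """Apply the shallow rule 'replace the last letter by its successor'."""
        return string[:-1] + successor(string[-1])

    print(rigid_rule("abc"))        # prints 'abd': the shallow rule suffices
    try:
        print(rigid_rule("xyz"))
    except ValueError as err:
        # The impasse at 'z' is what forces a re-perception of the problem;
        # the celebrated answer 'wyz' instead sees xyz as a mirror of abc.
        print("impasse:", err)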

We are staunch believers in the idea of using microworlds to study cognition, and the ideas behind our models are informed by many sources, ranging from biological metaphors (the brain is like an ant colony, thinking is like the parallel activity of enzymes in a single cell), to brain research, to the study of error-making, to the careful study of words and their halos, to the observation of our own smallish acts of creativity in various areas of life, to the study of how analogies have pervaded the greatest creative leaps made by physicists and mathematicians.

Lastly, a tiny comment on the philosophy behind FARG computer models. All of them are based on the idea that thinking is an extremely parallel, emergent phenomenon, as opposed to some kind of set of precise computational rules for manipulating abstract meaning-bearing symbols. In other words, we don’t see thinking as any kind of “logic” or “reasoning”, but as a kind of churning, swarming activity in which thousands (if not millions) of microscopic and myopic entities carry out tiny “subcognitive” acts all at the same time, not knowing of each other’s existence, and often contradicting each other and working at cross-purposes. Out of such a random hubbub comes a kind of collective behavior in which connections are made at many levels of sophistication, and larger and larger perceptual structures are gradually built up under the guidance of “pressures” that have been evoked by the situation. None of this activity is seen as being deterministic; rather, our models are all pervaded by randomness or “stochasticity”, to use a fancier term for the same idea. That is, each run of any FARG program will be different from the next run, even if the program is facing exactly the same situation. The pathway it follows will be totally different, if looked at on the most fine-grained level, although if one steps back from the trees to see the forest — that is, if one looks only at the very high-level (coarse-grained) behavior of the program — it may be that two runs are completely identical at that level of description. (Notice that observing and describing a program’s high-level behavior is in itself an act of very high-level perception.)
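For readers who would like a more concrete picture of this style of computation, here is a minimal sketch in Python, loosely inspired by published descriptions of Copycat’s architecture, in which tiny agents called “codelets” sit on a “coderack”, each with an “urgency”, and at every step one codelet is chosen at random with probability proportional to its urgency. Everything below is a simplification for illustration (the names and details are ours, not the code of any actual FARG model), but it conveys why no two runs ever follow the same fine-grained pathway.

    import random

    # A minimal sketch of urgency-weighted stochastic selection, loosely
    # inspired by published descriptions of Copycat's "coderack" of
    # "codelets".  Purely illustrative; not actual FARG code.

    class Codelet:
        def __init__(self, name, urgency, action):
            self.name = name        # what this tiny agent tries to do
            self.urgency = urgency  # how loudly it clamors to be run
            self.action = action    # the tiny "subcognitive" act itself

    def run(coderack, steps=10):
        """Repeatedly pick one codelet at random, weighted by urgency.
        Each run of this loop follows a different fine-grained pathway."""
        for _ in range(steps):
            if not coderack:
                break
            weights = [c.urgency for c in coderack]
            chosen = random.choices(coderack, weights=weights, k=1)[0]
            coderack.remove(chosen)
            chosen.action(coderack)  # a codelet may post further codelets

    # Two toy codelets working at cross-purposes, as in the text above:
    rack = [
        Codelet("sameness", 3, lambda rack: print("noticed a sameness bond")),
        Codelet("successor", 7, lambda rack: print("noticed a successorship bond")),
    ]
    run(rack)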

As we have moved from FARG’s early days, 30 or more years ago, into our more “mature” phase, we have come to be increasingly focused on having our models be able to look at their own (forest-level, not tree-level!) behavior. In other words, we have felt an increasing need to have our programs’ own behavior become part of the microdomain that they are able to perceive and “think about”. This is a very difficult challenge, and although we have been working hard at it for a decade or more now, we are still only at the very beginning of it.

One last word. Although we FARGonauts devise, implement, test, and revise computer models of thought processes, we do not consider ourselves to be carrying out artificial-intelligence research. The reason for this is that we are only trying to understand what human minds do; we build our models not in order to make computers “smarter” but in order to understand more clearly the huge gulf that lies between computers in their standard incarnations and human minds. Thus whereas many AI researchers want to make programs that avoid errors like the plague, we are interested in precisely the opposite. We are delighted when our programs make clumsy, stupid errors — we rejoice in such “fluidity”. Indeed, we want our programs to be able to be confused, blurry, totally lost, and frustrated. We hope, one day, that our programs might have just the barest glimmerings of a sense of humor, and in the midst of their own confused flailings, would be able to recognize how pathetic are their efforts, and to laugh at themselves. That would be a happy day for us FARGonauts.


Doug Hofstadter's involvement, past and present, in computational models of cognitive processes:

  • Jumbo (1981–1983): a computational model of the human ability to make plausible anagrams from a given set of letters.
  • Seek-Whence (1981–1986): a computational model of the perception and extrapolation by humans of linear patterns. Collaboration with Marsha Meredith.
  • Copycat (1983–1990): a computational model of high-level perception and creative analogical thought in a microdomain whose basic elements are letters and strings of letters. Collaboration with Melanie Mitchell.
  • Numbo (1987): a computational model of creative analogical thought in a microdomain whose basic elements are numbers combined by simple arithmetic operations. Collaboration with Daniel Defays.
  • Tabletop (1984–1991): a computational model of creative analogical thought in an idealized version of a familiar situation. Collaboration with Robert M. French.
  • Letter Spirit (1983–2000): a computational model of the perceptual and conceptual processes involved in generating a set of letters sharing an artistically uniform style. Collaboration with Gary McGraw and John Rehling.
  • Metacat (1993–1998): a computational model of high-level perception and analogy-making that deepens Copycat by bringing in episodic memory and self-monitoring. Collaboration with James Marshall.
  • Phaeaco (1995–2005): a computational model of the processes of vision and abstraction involved in solving “Bongard problems” (a set of visual analogy problems devised by M. M. Bongard and others). Collaboration with Harry Foundalis.
  • SeqSee (2003–2009): a project to model the human faculty of perception and extrapolation of linear patterns. Collaboration with Abhijit Mahabal.
  • George (2002–2009): a computational model of the visual imagination and the discovery process in Euclidean geometry. Collaboration with Francisco Lara-Dammer.
  • Musicat (2004–present): a computational model of human melodic perception. Collaboration with Eric Nichols.

Other Current Projects:

  • Computational models of conceptual fluidity, analogy-making, and creativity.
  • Verse translation (mostly into and out of French and Italian; occasionally other languages).
  • Error-making, especially in speech, as a window into mental processes (collaboration with David Moser, Greg Huber, and Emmanuel Sander).
  • Discovery, insight, clarity, and understanding in mathematics (especially in geometry, group theory, and Galois theory).
  • Order and chaos in a family of meta-Fibonacci recurrence relations (collaboration with Greg Huber); a brief illustrative sketch of the best-known such recurrence appears after this list.
  • The central role of analogy in physics.
  • Theorizing on general mechanisms of creativity, with ideas coming from personal experience in such activities as the invention of bon mots, the composition of musical pieces, the writing of poems, articles, and books, the translation of wordplay, and certain constrained forms of artistic design.
  • Drawing of ambigrams, especially in coordinated sets (e.g., the ongoing book project “Capitals in Capitals”, consisting of ambigrams on the names of all fifty American state capitals, all done by mirror reflection and in capital letters).
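As promised in the meta-Fibonacci item above, here is a small illustrative sketch (in Python) of the best-known such recurrence, Hofstadter’s Q-sequence, defined by Q(1) = Q(2) = 1 and Q(n) = Q(n − Q(n−1)) + Q(n − Q(n−2)) for n > 2. The family studied with Greg Huber generalizes this rule; the sketch shows only this classic member and makes no claim about the results of that collaboration. Unlike the Fibonacci rule, the sequence’s own earlier values determine how far back it reaches, which is the source of its famously erratic behavior.

    from functools import lru_cache

    # Hofstadter's Q-sequence, the best-known "meta-Fibonacci" recurrence:
    #   Q(1) = Q(2) = 1
    #   Q(n) = Q(n - Q(n-1)) + Q(n - Q(n-2))   for n > 2
    # The sequence itself decides how far back to reach, unlike Fibonacci.

    @lru_cache(maxsize=None)
    def Q(n):
        if n <= 2:
            return 1
        return Q(n - Q(n - 1)) + Q(n - Q(n - 2))

    print([Q(n) for n in range(1, 18)])
    # -> [1, 1, 2, 3, 3, 4, 5, 5, 6, 6, 6, 8, 8, 8, 10, 9, 10]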