Writing, Reading, Looking, Convolving
Imagine you’re on a train. You pass the graffitied backs of warehouses, front-loaders parked in a snowy lot, a red-brick-and-white-wood apartment building, an electric company grid station, and then trees, a lake, and snow-dusted hills in the distance. If you turn your head, you can shift your attention back and forth between the window of the train, the passengers inside, and the changing landscape outside.
Trees flash by in the foreground, but I choose not to pay attention to them and instead direct my attention to cows that stand, quite still, around a feeding trough. As we pass, I turn to look at them. They’re motionless. What were they doing? Were they actually real, or was that some kind of art installation?

This is what we do as we gaze out the window of a train. This is also what writers do. They direct our attention. Part of the pleasure of reading a book or watching a movie or TV series is that our attention is directed. It is directed from the humdrum of daily life to the lives of other people who are generally doing things more interesting than we are, but which we get to experience minus the potentially catastrophic consequences: say, dying in a firefight, or having your head chopped off, or going to a watery grave on the Titanic. In the hands of a skilled writer, our attention is guided and directed; it lingers on something for a time before being moved to something else.
This is not dissimilar to the process of reading, or, for that matter, listening to someone speak. In both, we take in information in packets and make sense of it as we assemble it. Both are passive: we sit, and the information comes to us. There should be a word for this. The closest one is probably convolving. It’s a kind of scanning of objects near and far: zooming in on one cow, call her Buttercup, and ignoring her surroundings, then backing out and seeing her as part of a larger landscape, looking over the landscape in the foreground, then going to the background, and so on.
The fundamental truth is that language emerges from being in the world. Because this is missing from a computer, all attempts to get at meaning and understanding by stochastic methods are, if not doomed, then severely crippled. This is also why ‘computers don’t understand a thing.’ Quite apart from the uninteresting philosophical debate about whether a computer can really ‘understand’, there is a very practical shortcoming: the computer is not being given the material it needs. It does not ‘understand’ because there is no relation between the words it is analyzing and the way that we humans understand those words. There is no relationship between the words and the world.
The next best thing to having a computer look like a human, eat like a human, and talk like a human would be to provide it with a way to recognize the concepts that words are expressing. In data science terms, that is, to provide the neural network with target concepts, aka labels. It needs some way to know when it is hitting the target idea. That’s what the label is, in this context: the thought, the concept, the idea. It’s the thing we’re passing back and forth when we chat, like a tennis ball.
Except that it isn’t quite a tennis ball. It grows and shrinks and morphs and changes.
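To make the “label as target concept” idea concrete, here is a minimal sketch of supervised learning with invented toy data: a nearest-centroid classifier. The feature vectors, the labels (“cow”, “tree”), and the numbers are all hypothetical; the point is only that the labels are the targets the model learns to hit.

```python
# Sketch: labels as target concepts. A toy nearest-centroid classifier
# learns one "concept" (centroid) per label from repeated examples.
# All data below is invented for illustration.

from collections import defaultdict


def train(examples):
    """examples: list of ((x, y) feature pair, label) tuples."""
    sums = defaultdict(lambda: [0.0, 0.0])
    counts = defaultdict(int)
    for (x, y), label in examples:
        sums[label][0] += x
        sums[label][1] += y
        counts[label] += 1
    # The centroid of each label is the "target concept" the model holds.
    return {label: (s[0] / counts[label], s[1] / counts[label])
            for label, s in sums.items()}


def predict(centroids, point):
    """Assign the label whose centroid is nearest to the new instance."""
    return min(centroids,
               key=lambda lbl: (centroids[lbl][0] - point[0]) ** 2 +
                               (centroids[lbl][1] - point[1]) ** 2)


# Two concepts, each seen only through repeated instances.
examples = [((1.0, 1.1), "cow"), ((0.9, 1.0), "cow"), ((1.2, 0.9), "cow"),
            ((5.0, 5.2), "tree"), ((4.8, 5.1), "tree"), ((5.1, 4.9), "tree")]

centroids = train(examples)
print(predict(centroids, (1.1, 1.0)))  # → cow
```

The classifier never sees a “cow” as such, only labeled instances; the label is the fixed target that makes the repeated instances add up to a concept.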
To summarize:
- Meaning arises out of repetition.
- We live in a world of instances, but we perceive a world of classes.
- Repetition in time is the equivalent of repetition in space.
Why is this important?
Because what I am describing is a way of defining meaning in a testable way. If all meaning is derived from repetition, then what we are recognizing must be something repeated, and repeatable. There can be no generalization without repetition: you always generalize from one thing to another.
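The claim that meaning arises out of repetition can be made testable in exactly the stochastic terms discussed earlier: words that repeatedly appear in the same contexts come to have similar representations. The following sketch uses a tiny invented corpus, far too small for real semantics, but it shows the mechanism.

```python
# Sketch: meaning from repetition. Each word's "meaning" is approximated
# by the contexts it is repeatedly seen in (a distributional view).
# The corpus is invented for illustration.

from collections import Counter
from math import sqrt

corpus = [
    "the cow eats grass", "the cow eats hay",
    "the horse eats grass", "the horse eats hay",
    "the train passes fields", "the train passes hills",
]


def context_vector(word, sentences):
    """Count every other word co-occurring with `word` in a sentence."""
    ctx = Counter()
    for s in sentences:
        tokens = s.split()
        if word in tokens:
            ctx.update(t for t in tokens if t != word)
    return ctx


def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    norm = lambda v: sqrt(sum(x * x for x in v.values()))
    return dot / (norm(a) * norm(b))


cow = context_vector("cow", corpus)
horse = context_vector("horse", corpus)
train_ctx = context_vector("train", corpus)

# "cow" and "horse" recur in the same contexts, so they end up more
# similar to each other than either is to "train".
print(cosine(cow, horse) > cosine(cow, train_ctx))  # → True
```

This is generalization from one thing to another in miniature: having repeatedly seen “cow” where “horse” also occurs, the model treats the two alike, without either word ever touching the world.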