Awakening From the Meaning Crisis by John Vervaeke, Ep. 29 — Getting to the Depths of Relevance Realization (Summary & Notes)
“Relevance realization has to be happening both at the feature level and the gestalt level, in a highly integrated, interactive fashion.”
(In case you missed it: Summary & Notes for Ep. 28: https://markmulvey.medium.com/awakening-from-the-meaning-crisis-by-john-vervaeke-ep-28-convergence-to-relevance-realization-b1364e2c2c81)
Ep. 29 — Awakening from the Meaning Crisis — Getting to the Depths of Relevance Realization [56:08] https://youtu.be/A6Q_B7z6gLc
- “What I want to try to show you now is how you might move towards trying to offer a scientific explanation of relevance — what that would look like, and the difficulties you would face doing so.”
- What should a good theory of relevance do? What kinds of mistakes should we avoid when trying to explain relevance? The main mistake: arguing in a circle. Whatever we come up with to explain relevance cannot presuppose relevance for its function.
- One candidate for explaining relevance: representations. That there are things in the mind (ideas, pictures, etc.) that stand for or represent the world in some way. Another candidate: computation. That it’s really a function of computational processes. Another candidate: modularity. That there is a specific area of the brain dedicated to processing relevance.
- Representations (this is a point John Searle has made) are aspectual. When you form a representation of an object in your mind you do not grasp all the true properties of that object, because their number is combinatorially explosive. Of all the properties you select only a subset. Which subset? The properties that are (wait for it) relevant to you, structured as co-relevant to each other. So aspectuality deeply presupposes your ability to zero in on relevance. This means representations cannot be the causal origin of relevance.
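To get a feel for why grasping "all the properties" is intractable: each representation selects some subset of an object's properties, and the number of possible subsets of n properties is 2^n. A minimal Python sketch of the growth (the property list is purely illustrative, not from the lecture):

```python
# Illustrative properties one might attribute to a single cup.
properties = ["red", "ceramic", "graspable", "half-full",
              "gift", "chipped", "warm", "left-of-keyboard"]

# Every aspect (representation) picks out some subset of the properties.
num_possible_aspects = 2 ** len(properties)
print(num_possible_aspects)  # 256 subsets for just 8 properties

# With 30 properties the count already exceeds a billion.
print(2 ** 30)  # 1073741824
```

The point of the toy count is only that selection cannot proceed by exhaustively considering subsets; something must already be zeroing in on the relevant ones.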
- Zenon Pylyshyn did some interesting work on something called multiple object tracking. His studies show people can reliably track about 8 different objects at one time. What's really interesting: the more objects you track, the fewer features you can attribute to each object. Their content properties start to get lost (e.g. you may be tracking a red X amidst a group of 7 other objects and reliably note its location, but not notice that it has changed into a blue square). You can track the hereness and nowness of it ("where is it?"), but everything else — shape, color, categorical identity — gets lost. (Pylyshyn calls this FINSTing, from FINSTs: FINgers of INSTantiation.)
- When physically touching an object, and therefore getting a sense (“in touch”) of its hereness and nowness, we make an object salient to ourselves. This is called salience tagging. Making this thing or that thing salient.
- The terms “this” or “that” are linguistic terms, but technically they are what’s called demonstrative references. Salience tagging, then, is what Vervaeke calls enactive demonstrative referencing, and is required before doing any kind of categorization. “If I’m going to categorize things I need to mentally group them together.” This means relevance sits below the representational (i.e. semantic: how your words refer to the world) level.
- (So far this is all consistent with reports of higher states of consciousness across cultures and through time, where people describe being in an eternal state of hereness and nowness and that its very nature is inexpressible, ineffable, and can’t be put into words.)
- Maybe the computational level can do a better job of explaining relevance realization for us. In the same way representations were about semantics, computation is at the syntactic level. Syntax is about how a series of terms have to be coordinated together in some system. In language this refers to the grammatical rules.
- One of the original defenders of the computational mind was Fodor, but he also had an important criticism. He pointed out that you have to make a distinction between implication and inference. Implication is a logical relationship (based on syntactic structures and rules) between propositions. An inference is when you’re actually using an implication relation to change your beliefs. And the thing about beliefs is that they have content. Why does this matter? Because changing beliefs raises the question: which beliefs should I be changing?
- The issue we have (which was also pointed out independently by Cherniak) is that the number of implications is combinatorially explosive. You can’t be comprehensively logical. You always have to select which sets of propositions are going to be used in an inference. It’s a kind of cognitive commitment. Which of these implications are you going to commit to, and commitment matters because it’s an act that makes use of your precious and limited resources of attention, memory, time, and metabolic energy.
- “According to Cherniak, what makes you intelligent as a cognitive agent is that you select out of all the possible implications the relevant ones, because those ones are relevant to the context because they are going to affect the beliefs that you have already done relevance realization on as applying to this situation or representing this situation well.”
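Cherniak's point about not being comprehensively logical can be made concrete with a toy count: even checking only pairwise implications among n propositions requires examining on the order of n² ordered pairs, and multi-step inference chains grow far faster. A hedged illustration in Python (the numbers only show growth; this is not a model of cognition):

```python
from math import factorial

def pairwise_checks(n):
    # Ordered pairs ("does p imply q?") among n propositions.
    return n * (n - 1)

def inference_chains(n, k):
    # Ordered chains of k distinct propositions drawn from n: n!/(n-k)!
    return factorial(n) // factorial(n - k)

print(pairwise_checks(10))    # 90 pairs for just 10 beliefs
print(inference_chains(10, 5))  # 30240 possible five-step chains
print(pairwise_checks(1000))  # 999000 — exhaustive checking is hopeless
```

Since no agent can afford that search, intelligence on Cherniak's view consists in selecting the relevant implications before reasoning, rather than reasoning over all of them.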
- Logic isn’t just the implications, it’s the rules governing the implications. How do rules work? Well, Brown and others have made clear that rules are propositions that tell you where to commit your resources.
- The problem is, every rule requires an interpretation. “Every rule requires a specification for its application.” e.g. “Be kind.” This is a rule many of us have, but we are kind to friends in a different way than we are kind to our parents, or spouses, or strangers, etc. “I cannot specify all the conditions of the application of the rule in the rule, because the rule always has to convey much more than it can say. If I try to specify it in the rule then the rule will become unwieldy because it will become combinatorially explosively large.” (And you can’t just put in a higher-order rule — a rule on how to use this rule — because the same problem recurs.)
- Following a rule relies on something we have that Brown calls the skill of judgment. This takes us out of the propositional language of a rule into the procedural language of a skill. The skill of judgment is the skill of relevance realization. The propositional always depends on the procedural. (This was one of Wittgenstein’s arguments.)
- When you’re exercising a skill it depends on your situational awareness, i.e. your perspectival knowing and your ability to do salience landscaping (foregrounding what’s relevant, backgrounding what’s irrelevant, adjusting accordingly to make skills more adapted and fitted, etc…) So your procedural knowing depends on your perspectival knowing. And your perspectival knowing ultimately depends on how well the agent and arena fit together and generate affordances of action. i.e. perspectival knowing depends on participatory knowing.
- propositional depends on procedural depends on perspectival depends on participatory
- Finally we get to the third candidate for explaining relevance, modularity. This depends on a “central executive” function in the brain, but this obviously won’t work because that would in itself depend on relevance realization. We’ve just pushed the problem back. “Relevance realization has to be happening both at the feature level and the gestalt level, in a highly integrated, interactive fashion.”
- Our account of relevance realization has to be completely internal, meaning: it has to work in terms of goals that are at least initially internal to the brain and emerge developmentally from it. Autopoietic systems, with scale-invariant processes that occur at multiple levels simultaneously, that are self-organizing such that they are capable of insight. Self-correction.
- We hit a problem here. Next time we will go into the following argument: that we cannot have a scientific theory of relevance, and that this tells us something very deep about the nature of relevance and of meaning. But that just because this is true, it doesn’t preclude us from having a theory of relevance realization.
Up Next: Awakening From the Meaning Crisis by John Vervaeke, Ep. 30 — Relevance Realization Meets Dynamical Systems Theory (Summary & Notes) https://markmulvey.medium.com/awakening-from-the-meaning-crisis-by-john-vervaeke-ep-b7155e7a09bd
List of Books in the Video:
- Harold Brown — Rationality
- Christopher Cherniak — Minimal Rationality
- Zenon Pylyshyn — Things and Places: How the Mind Connects with the World
- Dan Sperber — Relevance: Communication and Cognition