Awakening From the Meaning Crisis by John Vervaeke, Ep. 28 — Convergence To Relevance Realization (Summary & Notes)

Mark Mulvey
6 min read · Oct 5, 2021

“You can’t say everything you want to convey. You rely on people reading between the lines. That is actually what this word means. One of the etymologies of ‘intelligence’ is inter legere, which means: to read between the lines.”

(In case you missed it: Summary & Notes for Ep. 27: https://markmulvey.medium.com/awakening-from-the-meaning-crisis-by-john-vervaeke-ep-27-problem-formulation-summary-notes-9697835e8a47)

Ep. 28 — Awakening from the Meaning Crisis — Convergence To Relevance Realization [54:27] https://youtu.be/Yp6F80Nx0lc

  • As we start to understand ideas of “intelligence” converging on the idea of being a General Problem-Solver, we’re also seeing strands of ideas converging on Relevance Realization as being what makes us General Problem-Solvers.
  • “Your ability to categorize things massively increases your ability to deal with the world”: to make predictions, to extract potentially important information, to communicate…
  • A category is not just a set of things, it’s a set of things that you sense belong together. How is it that we categorize things? We may not be able to answer that fully, but this notion of Relevance Realization is at the center of it.
  • What does ‘similarity’ mean in the logical sense? Partial identity. Sharing features. “Kinda the same.” The philosopher Nelson Goodman says if you agree with that then you now have a problem: any two objects are logically, overwhelmingly similar. (e.g. a bison & a lawnmower. Both found in North America, neither was found in North America 300 million years ago, both contain carbon, both can kill you if not properly treated, both have an odor, both weigh less than a ton, neither one makes a particularly good weapon… this list becomes explosively large.)
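Goodman's point can be made concrete with a toy sketch (Python, illustrative only, not from the lecture): if a "property" is modeled as any subset of a universe of objects that share it, then any two objects co-occur in a combinatorially large number of properties.

```python
from itertools import combinations

# Toy universe. Model a "property" as any subset of objects that
# share it (e.g. "found in North America" picks out some subset).
objects = ["bison", "lawnmower", "bee", "knife", "hole"]

# Count the properties (subsets) that a bison and a lawnmower share,
# i.e. subsets containing both objects.
shared = 0
for r in range(2, len(objects) + 1):
    for subset in combinations(objects, r):
        if "bison" in subset and "lawnmower" in subset:
            shared += 1

# With n objects there are 2**(n - 2) such shared properties:
# here 2**3 = 8, and the count doubles with every object added.
print(shared)  # 8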
  • You might say “Yeah, but those aren’t the important properties, you’re picking trivial ones…” but notice what you’re doing: you’re telling me I haven’t zeroed in on the relevant properties. The ones that are obvious to you, and stand out to you as salient. You’re shifting from a logical to a psychological account of similarity.
  • What matters for psychological similarity is not finding any true comparison, but finding the relevant comparisons. (The same thing happens when you decide two things are different.)
  • Some people then invoke Darwin and point not to abstract concepts but concrete survival situations, that we select relevant information based on what helps us avoid danger etc. But what set of features do all dangerous things share? Holes are dangerous, bees are dangerous, poison, knives, lack of food… what do all these things share?
  • Let’s say we were building a sophisticated robot or machine— an agent that can determine the consequences of its behavior and change its behavior accordingly. And we give it a problem: we give it a wagon with a handle, and on it is a battery. And much like humans or animals who acquire food, the robot is inclined to take the battery elsewhere before consuming it. But here’s where we introduce the problem: alongside the battery in the wagon is a lit bomb. The robot decides to pull the handle and bring the battery along (because it has determined that that is the intended effect of pulling the handle), but the bomb eventually goes off and destroys the robot. What did we do wrong?
  • We only had the robot look for the intended effects of its behavior, we didn’t have it look for side effects.
  • People do this, they forget or fail to check side effects. e.g. they enter a space where they know flammable gas is diffused, but it’s dark, so they strike a match because they want the intended effect of making light, but it has the unintended effect of making heat, which sets off the gas and harms or kills them.
  • So we give the robot more computational power (more sensors, etc.) so it can account for side effects too. We also add a black box so we can see what’s going on inside the robot. But when we put it back in front of the wagon it doesn’t do anything, nothing happens. Why?
  • The robot is determining all the possible side effects. If it pulls the handle it will make a squeaking noise, and if it pulls it the right wheel will turn a certain amount, same with the other wheels, and because of a skew in the axle there will be a slight wobble, and the grass underneath the wheels will be indented, and the position of the wagon with respect to Mars is being altered… in other words: the number of side effects is combinatorially explosive.
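The explosion the robot faces can be sketched numerically (Python; the fact names are illustrative, not drawn from the lecture): if pulling the handle can alter any combination of n world facts, an exhaustive checker must consider 2**n possible outcome states before acting.

```python
# Illustrative side-effect facts the robot's sensors might track.
facts = ["squeak", "right_wheel_turns", "axle_wobble",
         "grass_indented", "position_vs_mars", "battery_moves",
         "bomb_moves"]

# An exhaustive planner that checks every combination of these facts
# must examine 2**n possible outcome states.
states_to_check = 2 ** len(facts)
print(states_to_check)  # 128

# Each additional sensed fact doubles the work, so adding sensors
# makes the robot sit there calculating instead of acting.
print(2 ** (len(facts) + 1))  # 256
```

This is why adding computational power alone made things worse: more sensors mean more facts, and the space of states to check grows exponentially.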
  • So we come up with a definition of relevance (though Vervaeke will later argue that this is impossible, which will be crucial for understanding our response to the meaning crisis) and give it to the robot. But it still goes up to the wagon with the battery and the bomb and just sits there calculating. When we look inside the black box we see it’s been making two lists — relevant vs. irrelevant — and it’s checking everything and filing it under irrelevant.
  • In reality, what we’re doing as humans isn’t filing things into relevant and irrelevant, we’re ignoring (somehow) what’s irrelevant and just zeroing in on what’s relevant.
  • (This problem of the proliferation of side effects in behavior is known as The Frame Problem, and even if you get past it you’re left with the subsequent problem of having to file everything into relevant vs. irrelevant, which is known as The Relevance Problem.)
  • H.P. Grice pointed out that you always seem to be conveying much more than you’re saying. Communication relies on this fact. Otherwise a simple request (“Excuse me” or “I’m out of gas”) would require an impractically long set of detailed explanations to make clear all the assumptions that are bundled into these short, simple phrases. (If you say “I’m out of gas” and someone comes up and blows some helium into your car window, that’s annoying at best.)
  • “You can’t say everything you want to convey. You rely on people reading between the lines. That is actually what this word means. One of the etymologies of ‘intelligence’ is inter legere, which means: to read between the lines.”
  • We don’t actually demand that people speak the truth (if we did we’re screwed, because most of our beliefs are false). We’re actually asking people to be honest or sincere: to convey what they believe to be true.
  • But are they supposed to say everything that they believe to be true? To convey everything that’s in their mind at a given moment? No, that would be asking for another combinatorial explosion. What we mean is: convey what is relevant to me, the conversation, and the context.
  • “The key of your ability to communicate is your ability to realize relevant information.”
  • Selective attention is another thing that is doing relevance realization, feeding what it selects into working memory. And you use working memory to solve problems: to deal with the combinatorial explosion in the problem space. All of that is interacting with the proliferation of side effects (which we saw with the robot example). And then you have to organize ALL of that into your long-term memory and access it. All of these different aspects of intelligence are interacting with one another. All of this is ‘The Relevance Problem,’ and it is the core of what makes you intelligent.
  • What if when we’re talking about “meaning” we’re talking about how we find things relevant to us. To each other. To part of ourselves, how we’re relevant to the world and how it’s relevant to us…
  • “All this language of connection is not the language of causal connection, it’s a language of establishing relations of relevance between things.”

Up Next: Awakening From the Meaning Crisis by John Vervaeke, Ep. 29 — Getting to the Depths of Relevance Realization (Summary & Notes) https://markmulvey.medium.com/awakening-from-the-meaning-crisis-by-john-vervaeke-ep-aebe5706b84c
