Awakening From the Meaning Crisis by John Vervaeke, Ep. 27 — Problem Formulation (Summary & Notes)
“What’s actually missing in an ill-defined problem is how to formulate the problem. How to zero in on the relevant information and constrain the problem so you can solve it.”
(In case you missed it: Summary & Notes for Ep. 26: https://markmulvey.medium.com/awakening-from-the-meaning-crisis-by-john-vervaeke-ep-26-cognitive-science-summary-notes-8b12fe0d6075)
Ep. 27 — Awakening from the Meaning Crisis — Problem Formulation [54:46] https://youtu.be/9j5O-tnaFzE
- Two things to note about the problem space diagram: 1.) all the possible paths haven't been drawn out (on purpose), and 2.) it's misleading because it's been created from a God's-eye point of view. Having a problem means acting from the POV of the initial state, not from above things. You're ignorant of the path that will get you to the goal state.
- You might then say "So what? Why have this diagram?" You can use the diagram to calculate the number of pathways: F^D, where F is the number of operators available at each state and D is the number of stages you go through.
- This works when analyzing a chess game, for example: with roughly 30 legal moves available at each position (F = 30) and roughly 60 moves in a game (D = 60), the number of pathways is 30⁶⁰. This is called combinatorial explosion. It's an astronomically huge number, greater than the number of atomic particles estimated to exist in the known universe, which means you cannot search the whole space.
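A quick back-of-the-envelope check of that claim in Python (the ~10⁸⁰ figure for particles in the observable universe is a commonly cited estimate, not a number from the episode):

```python
# Sanity-check the combinatorial explosion claim.
# F = branching factor (~30 legal moves per chess position),
# D = depth (~60 moves in a typical game).
F, D = 30, 60

pathways = F ** D            # number of possible move sequences: 30^60
atoms_estimate = 10 ** 80    # common estimate of particles in the observable universe

print(f"30^60 has {len(str(pathways))} digits")   # 89 digits, i.e. ~10^88
print(pathways > atoms_estimate)                  # True
```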
- Instead, our brains will begin searching in a tiny subsection of the whole space and will often find a solution. (Vervaeke calls this aspect of reality his professional obsession because it’s so fascinating) You’re able to immediately zero in on the relevant information. Relevance realization.
- How do we do this? Even the fastest chess-playing computer can’t check the whole space. How we avoid this combinatorial explosion is a central way of understanding intelligence. Part of what’s involved is the generation of obviousness, but how does your brain make things obvious to you? You’re constantly restructuring what you find relevant and salient.
- In many ways this is the key problem that AGI research is trying to address right now. There has also been a wrestling with the distinction between a heuristic and an algorithm (going back partly to Polya's initial work in his book How To Solve It).
- An algorithm is a problem-solving technique that is guaranteed to find a solution or prove that a solution can’t be found. Since it relies on ideas of certainty, there’s a problem with this: in order to be certain you have found the answer or that one can’t be found, how much of the problem space do you have to search? Well, to guarantee certainty you have to search all of it.
- Deductive logic is also algorithmic and works in terms of certainty. So does math. Which means you cannot be comprehensively logical. "Trying to equate rationality with being logical is absurd." Rational (note the etymology: ratio, rationing…) means knowing when, where, how much, and to what degree to be logical. Which is a much more difficult thing to do.
- A heuristic is a problem-solving technique that is not guaranteed to find a solution, but reliably increases your chances of achieving your goal. Heuristics work by prespecifying where you should search for the relevant information. This is what makes a heuristic a sort of bias: it biases where you're paying attention.
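A toy contrast of the two ideas (my illustration, not from the episode): the exhaustive algorithm must examine the whole space to earn its guarantee, while the greedy heuristic searches a tiny, biased subsection of it and can be deceived:

```python
# Toy illustration (mine, not from the episode): find the x in 0..999
# that maximizes f, a "landscape" with a local peak at x=100 and the
# true peak at x=700.

def f(x):
    return max(50 - abs(x - 100), 80 - abs(x - 700))

# Algorithm: exhaustive search. Guaranteed correct, but it has to
# examine every state in the space.
best = max(range(1000), key=f)       # checks all 1000 states
print(best, f(best))                 # 700 80  (the true peak)

# Heuristic: greedy hill-climbing. Searches only a tiny, biased
# subsection of the space; fast, but offers no guarantee.
x = 0
while x + 1 < 1000 and f(x + 1) > f(x):
    x += 1                           # only ever looks one step ahead
print(x, f(x))                       # 100 50  (trapped on the local peak)
```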
- This is the "no free lunch" theorem: heuristics are unavoidable (you have to use them to avoid combinatorial explosion), and the price you pay is falling prey to bias. "Again: the very things that make us adaptive are the things that make us prone to self-deception." (We'll get into deep reasons later why heuristics alone are insufficient.)
- Much of what has been discussed so far is due to the work of Newell and Simon, who help teach us what it is to do good cognitive science. But look at what they're doing: they're taking a complex phenomenon and trying to analyze it down to its basic components. And like Descartes they're trying to formalize it: give us a graphical, mathematical representation. This lets us do calculations, equations, etc. And then they try to mechanize the analysis, putting it in a form a machine can carry out.
- In trying to explain the mind we often fall into a particular fallacy: infinite regress, circular explanations. So what Newell and Simon are trying to do is take a mental term ("intelligence") and explain it using non-mental terms (analyze-formalize-mechanize). This exemplifies the scientific method. It gets at what Vervaeke calls the naturalistic imperative.
- “If I’m always using mind to explain mind, I’m actually never explaining the mind. I’m just engaged in a circular explanation.”
- If cognitive science can give a synoptic integration by creating plausible constructs (like what Newell & Simon are doing) then it creates the possibility of making us finally be part of the scientific worldview — “not as animals or machines, but giving a scientific explanation of our capacity to generate scientific explanation.”
- Now for some critiques of Newell & Simon. Their notion of heuristics, while necessary, is insufficient. They failed to recognize other ways in which we constrain the problem space and zero in on relevant information in a dynamically self-organizing fashion. They failed to notice that they had an assumption: that all problems are essentially the same.
- This is kind of ironic. We have a heuristic of essentialism: that when we group a bunch of things together with a word, they must all share some core properties. An essence. Some categories do have one ("triangles," for instance, all share an essence, certain defining features), but not everything we group together has an essence (Wittgenstein pointed this out).
- Wittgenstein used the example of games. We call many things “games.” Not all involve competition, or other people, or imagination, or pretense… you won’t find a definition that includes all and only games. Many categories don’t have an essence. Essences allow us to generalize though, which is why we look for them. And generalizations can help us make very good predictions.
- Newell & Simon thought that all problems are essentially the same, which means they only needed to find one essential problem-solving strategy, and that how you formulate a problem is therefore trivial. Essentialism isn’t a bad thing, but in this case Newell & Simon were wrong. All problems are not essentially the same.
- There are different kinds of problems. A central distinction is between well-defined problems and ill-defined problems. A sum like 33+4 is a well-defined problem: the initial state, goal state, and operators are all clear. Since our education is full of well-defined problems we tend to think this is what most problems are like, which means we don't pay attention to how we formulate the problem. Most of our problems are actually ill-defined problems, where we don't know what the relevant information about the initial state (or goal state) is, or what the relevant operators are. Or even what the relevant path constraints are.
- e.g. here’s a problem: Take good notes. Ask yourself: could you program a machine to do this for you? How do you take good notes? What are they? How do you know what to write down? Which information is relevant and worth writing down and which isn’t? Etc. (Other examples of ill-defined problems are “following a conversation,” or “tell a joke,” or “go on a successful first date”)
- “What’s actually missing in an ill-defined problem is how to formulate the problem. How to zero in on the relevant information and constrain the problem so you can solve it.”
- What’s missing is good problem formulation, which involves zeroing in on relevant information. Relevance realization.
- Simon realized something later, after an experiment he did with Kaplan in 1990 known as the "mutilated chessboard" experiment. Start with a standard 8×8 chessboard and dominoes that each cover two adjacent squares: how many dominoes do you need to cover the chessboard? (32, one per pair of squares.)
- Now what if you mutilate the chessboard by removing two diagonally opposite corner squares? How many dominoes do you need to cover the board now (with no overlap or overhang)?
- Many people find this a hard problem because they formulate it as a covering problem: they try to imagine the board and employ a "covering strategy," which is combinatorially explosive. But you can reformulate it. A domino always has to cover one black square and one white square, so covering the whole board requires an equal number of black and white squares. The two squares that were removed were both white, which means you no longer have equal numbers of black and white squares, which means you can prove it's impossible to cover the board with the dominoes. (This is known as a "parity strategy.")
- The parity strategy makes the two removed white squares salient to you, and then makes the fact that the task is impossible obvious to you.
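A minimal sketch of that parity reasoning (my reconstruction in code, not material from the experiment):

```python
# Squares are (row, col); colors alternate like a chessboard.
board = {(r, c) for r in range(8) for c in range(8)}
board -= {(0, 0), (7, 7)}    # remove two diagonally opposite corners

def color(square):
    r, c = square
    return "white" if (r + c) % 2 == 0 else "black"

whites = sum(1 for sq in board if color(sq) == "white")
blacks = len(board) - whites

# Each domino covers exactly one white and one black square, so a full
# covering requires whites == blacks. Here the counts differ: impossible.
print(whites, blacks)        # 30 32
```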
- This capacity to come up with good problem formulation that turns ill-defined problems into well-defined problems — from a self-defeating strategy to a productive one — is insight. (In fact, the title of that experiment was “In Search of Insight”) This is why insight, in addition to logic, is essential to rationality.
Up Next: Awakening From the Meaning Crisis by John Vervaeke, Ep. 28 — Convergence To Relevance Realization (Summary & Notes) https://markmulvey.medium.com/awakening-from-the-meaning-crisis-by-john-vervaeke-ep-28-convergence-to-relevance-realization-b1364e2c2c81
List of Books in the Video:
- G. Polya — How To Solve It