5 confusion isn’t going away

  1. There’s probably no broad tendency toward the elimination of confused thinking, despite us becoming less confused about any particular “finite question”1 (or family of questions); this has to do with our interests growing with our understanding/deconfusion. We could call this “the convergence question” or “the compactness question”.
  2. This really depends on the way we’re measuring confusion (or its elimination), and different criteria could make sense for different purposes and give genuinely different verdicts, so I’ve been somewhat provocative here.2
  3. But I think this is probably true when we measure confusion around questions we’re interested in (with the caveat that there can still be multiple sensible choices giving different answers here depending on what we’re more precisely interested in, so I’m still being slightly provocative).
  4. One consideration here is that we’re drawn to places where we’re still confused, because that’s where there are things to be worked out better. There’s a messy, confused frontier where we will hopefully be operating indefinitely (or at least for as long as we’re around; and if/when we aren’t around anymore, other minds will hopefully be operating at this frontier for as long as minds are around).
  5. Another consideration is that mathematics and (relatedly, not independently) technological development provide an infinite supply of interesting things for us to be confused about.
  6. One more (related) consideration is that when we’re trying to (e.g.) prove some theorem or to build some technology, we’re likely to still be confused about stuff around it and about how to think well around it, because otherwise we’d already be done with it.
  7. One could try to work on a project of reorganizing thought which aims to push all confusion/ambiguity into probabilities on some hypothetical clear language, but such a project can’t succeed in its aim,3 because there are probably many interestingly different ways in which it is useful to be able to be confused. [Incidentally: I’m clearly confusedly failing to distinguish between different kinds of confusion in this note :).] [For example, it is probably good for many purposes to employ an “ecological arsenal” of concepts — in particular, to have concepts be ready to “evolve”, “take on new responsibilities/meanings”, “carry weight in our (intellectual) pursuits in new ways”, “enter into new relations with each other”. Maybe this looks weird to you if you are used to only thinking of words/concepts as things which are supposed to somehow just refer; I suggest that you also see how words/concepts are like [technological components]/[code snippets]/tools which can [make up]/[self-assemble]/[be assembled] into (larger) apparatuses/programs/thoughts/activities — given this, having a corps of dynamic concepts might start to look sensible.]4 You could probably pull off making an advanced mind with a clear structure where only a particular kind of confusion seems to be explicitly allowed, but to the extent that such a mind gets very far, I expect it will just be embedding other ways to think confusedly inside the technically-clear structure you technically-[forced it to have].
  8. All this isn’t to say that we shouldn’t be trying to become less confused about particular things. I think becoming less confused about particular things (richly conceived) is a central human project (and it is probably a central endeavor for ~all minds)!
  9. More generally, I doubt thinking is or is going to become very neatly structured — I think it’ll probably always be a mess, like organic things in general.
    1. To consider an example somewhat distinct from confused thinking: will there be a point in time after which thinking-systems are (or the one big world-thinking-system is) partitioned into distinct thinking-components, each playing some clear role, fitting into some neat structure? I doubt there will be such an era (much like I doubt there will be such an era for the technological world more broadly), for one because it seems good to allow components to relate to other components in varied and unforeseen ways5 (in particular, it is good to be able to make analogies to old things to understand new things; if we look at making an analogy from the outside, it is a lot like putting some old understanding-machinery to a new use).6 So, thinking-structures will probably (continue to) relate to other thinking-structures in a multitude of ways, with each component playing many roles, and probably with the other components setting the context for each component, as opposed to there being some separate structure above all the components.
    2. That said, certainly there will also be many clean constellations of components in use (like current computer operating systems). Moreover, there are plausibly significant forces/reasons pushing toward thought being more cleanly organized — for instance: (1) cleaner organization could make thought easier to understand, improve, redeploy, control; (2) if some evolution-made thinking-structures in brains get replaced by ones which are more intelligently designed at some point, those would plausibly be made “in the image of some cleaner idea(s)” (compared to evolution’s design, and at least in some aspects). It seems plausible that these forces would win [in some “places”]/[for/over some aspects of thought]. I’d like to be able to provide a better analysis of what the forces toward messiness and the forces toward order/structure add up to in various places — I don’t think I’m doing justice to the matter here. In particular, I’d like to have a catalogue of comparisons between evolution-made and human-made things meeting some specifications.7 I’d love to even just have a better catalogue of the [forces toward]/[reasons for] messiness and the [forces toward]/[reasons for] order/structure (not just in the context of thinking).
    3. Generally, I’d expect eliminating messy thinking-systems to be crippling, and eliminating clean systems to also be crippling. (I’d also expect eliminating confused thinking to be crippling, and eliminating rigorous/mathematico-logical speaking to be crippling.)

onward to Note 6!


  1. Here’s some context which can hopefully help make some sense of why one might be interested in whether confusion is going away (as well as in various other questions discussed in the present notes). You might have a picture of various (imo infinite) endeavors on which pursuing such an endeavor looks like moving along a trajectory converging to some point in some space; I think this is a poor picture. For example, this could show up when talking of being in reflective equilibrium or reflectively stable, when imagining coherent extrapolated volition as some sort of finished product (as opposed to there being a process of “extrapolation” genuinely continuing forever), when talking of a basin of attraction in alignment, when thinking of science or math as converging toward some state where everything has been understood, when imagining reaching some self-aware state where you’ve mostly understood your own thinking (in its unfolding), or, in the case of this note, when imagining deconfusion/philosophy/thinking as approaching some sort of ultimate deconfused state. If we want to think of a mind being on a trajectory in some space, I’d instead suggest thinking of it as being on a trajectory of flight, running off to infinity in some weird jagged fashion in a space where new dimensions keep getting revealed (no, not even converging in projective space or whatever). Or (I think) better still, we could maybe imagine a “(tentacled?) blob of understanding” expanding into a space of infinitely high dimension (things should probably be discrete — you should probably imagine a lattice instead of continuous space), where a point being further in the interior of the blob in more directions corresponds to a thing being [more firmly]/[less confusedly] understood (perhaps because of having been more firmly put in its proper context). Given reasonable assumptions, it will always remain the case that most points in the blob are close to the boundary of the blob in many directions (a related fact: a unit ball in high dimension has most of its volume near its surface), so “the blob” will always remain mostly confused, even though any particular point will eventually be more and more securely in the interior of the blob, and so any particular thing will eventually be less confusedly grasped. To be clear: the present footnote is mostly not intended as an argument in support of this view — I’m mostly just stating the question.↩︎
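
  As an aside, the high-dimensional volume fact invoked above can be checked in a few lines (a minimal sketch; the function name is mine): since the volume of a \(d\)-dimensional ball of radius \(r\) scales as \(r^d\), the fraction of the unit ball lying within distance \(\varepsilon\) of its surface is \(1 - (1 - \varepsilon)^d\), which tends to \(1\) as \(d\) grows.

  ```python
  def fraction_near_surface(d: int, eps: float) -> float:
      """Fraction of a d-dimensional unit ball's volume within
      distance eps of its surface: 1 - (1 - eps)^d, since the
      volume of a ball of radius r scales as r**d."""
      return 1 - (1 - eps) ** d

  # Even a thin 1% shell holds almost everything once d is large.
  for d in (1, 10, 100, 1000):
      print(d, round(fraction_near_surface(d, 0.01), 4))
  ```

  So a point drawn from the blob is, with overwhelming probability, near its boundary in high dimension — the geometric analogue of “the blob will always remain mostly confused”.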

  2. Also, I haven’t really decided if I want to be saying something about the importance of confusion relative to other stuff or if I want to be saying something about whether confusion will continue to play a very important role instead.↩︎

  3. That said, the project could totally succeed in other ways — for example, trying to address some issue with a naive construction of such a language, one could discover/[make explicit]/invent a novel thinking-structure.↩︎

  4. That said, assigning probabilities to pretty clear statements is very much a sensible/substantive/useful/real thing — e.g., in the context of prediction markets.↩︎

  5. Though note that one could also look at arbitrage as an example of this, and there’s a case to be made for opening up a new arbitrage route increasing some sort of order/coherence despite putting some structures in a new relation.↩︎

  6. This is related to it being good to “train” the thinking-system in part “end-to-end”.↩︎

  7. I don’t know if I should be fixing a target and then either asking each to do its work, or looking for examples of evolution having hit that target and asking humans to hit it too (in these cases, evolution might come up with a thing that also does \(100\) other things), or painting the target around some stuff evolution has made and asking humans to make something similar.↩︎