6 thinking (ever better) will continue

  1. Could history naturally consist of a period of thinking, of figuring stuff out — for example, of a careful long reflection lasting \(10^5\) years, during which e.g. ethics largely “gets solved” — followed by a period of doing/implementing/enjoying stuff — maybe of tiling the universe with certain kinds of structures, or of luxury consumerism?
  2. Could history naturally consist of a period of fooming — that is, becoming smarter, self-reprogramming, finding and employing new thought-structures — followed by a period of doing/implementing/enjoying stuff — maybe of tiling the universe with certain kinds of structures, or of luxury consumerism?
  3. a mostly-aside: These two conceptions of history are arguably sorta the same, because figuring a lot of stuff out (decently quickly) requires a lot of self-reprogramming, and doing a lot of self-reprogramming (decently quickly) requires figuring a lot of stuff out. And really, one probably should think of gaining new understanding and self-reprogramming-to-think-better as the same thing to a decent approximation. I’ve included these as separate conceptions of history, because it’s not immediately obvious that the two are the same, and in particular because one often conceives of a long reflection as somehow not involving very much self-reprogramming, and also because the point I want to make about these conceptions can stand without having to first establish that these are the same.
  4. It’d be profoundly weird for history to look like either of these, for (at least) the following reasons:
    1. There’s probably no end to thinking/fooming.¹ There will probably always be major interesting problems to be solved, including practical problems — for one, because “how should one think?” is an infinite problem, as is building useful technologies more generally. Math certainly doesn’t seem to be on a trajectory toward running out of interesting problems. There is no end to fooming, because one can always think much better.
    2. The doing/implementing/enjoying is probably largely not outside and after the thinking/fooming; these are probably largely the same thing. Thinking/fooming are kinds of doing, and most of what most advanced minds are up to and enjoy is thinking/fooming. In particular, one cares about furthering various intellectual projects, about becoming more skilled in various ways, which are largely fooming/working/thinking-type activities, not just tiling/enjoying-type activities.
  5. This is not to say that it’d be impossible for thinking or fooming to stop. For instance, an asteroid could maybe kill all humans or even all vertebrates, and there could be too little time left before Earth becomes inhospitable [for serious thought to emerge again on Earth after that]. Or we could maybe imagine a gray goo scenario, with stupid self-replicating nanobots eating Earth and going on to eat many more planets in the galaxy.² So, my claim is not that thinking and fooming will necessarily last forever, but that the natural trajectory of a mind does not consist of an era of thinking/fooming followed by some sort of thoughtless era of doing/implementing/enjoying.
  6. So, superintelligence is not some definite thing. If I had to compare the extent to which superintelligence is some definite thing to the extent to which general intelligence is some definite thing, I think I’d say that superintelligence is even less of a definite thing than general intelligence. There’s probably not going to be a time after superintelligence develops such that intelligence has by then stopped developing. Similarly/equivalently, there’s no such thing as a remotely-finished-product-[math ASI].
  7. All this said, it seems plausible that there’d be a burst of growth followed by a long era of slower growth (and maybe eventually decline) on measures like negentropy/energy use or the number of computational operations (per unit of time)³ (though note also that the universe has turned out to be larger than one might have thought many times in the past and will plausibly turn out to be a lot larger again). It doesn’t seem far-fetched that something a bit like this would also happen for intelligence, I guess.
    1. I should try to think through some sort of more careful economics-style analysis of the future of thinking, fooming, doing, implementing, enjoying. Like, forgetting for this sentence that these are not straightforwardly distinct things: if we were to force a fixed ratio of thinking/fooming to doing/implementing/enjoying, what should we expect the marginal “costs”/“benefits” to look like, and in what direction away from the fixed ratio (and how far) would that suggest shifting? (One toy setup for this question is sketched after this list.)
    2. That said, even if this type of economic argument were to turn out to support thinking/fooming eventually slowing down relative to implementing/enjoying, I might still think that the right intuition to have is that there’s this infinite potential for thinking better anyway, but idk. And I’d still probably think we’re probably not anywhere close (in “subjective time”) to the end of thought/fooming.⁴
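
To make the question in sub-item 1 above a bit more concrete, here’s one toy setup, just an illustrative sketch: the allocation fraction \(\theta\), the growth function \(g\), the horizon \(T\), the discount factor \(\beta\), and the utility function \(u\) are all hypothetical choices of mine, not anything established above. Suppose that in each period, a fraction \(\theta\) of resources goes to thinking/fooming, which compounds a productivity level \(A_t\) (starting from some \(A_0\)), and the remaining \(1-\theta\) goes to doing/implementing/enjoying, which yields value directly:

\[
A_{t+1} = A_t\,\bigl(1 + g(\theta)\bigr), \qquad V(\theta) = \sum_{t=0}^{T} \beta^{t}\, A_t\, u(1-\theta).
\]

In this setup, a forced fixed ratio \(\theta\) is locally suboptimal unless \(V'(\theta) = 0\), with the suggested shift direction given by \(\operatorname{sign}(V'(\theta))\); an interior optimum balances the compounding benefit of thinking against the direct value of doing, \(\sum_t \beta^{t}\, u(1-\theta)\, \partial A_t/\partial\theta = \sum_t \beta^{t}\, A_t\, u'(1-\theta)\). Since \(A_t = A_0 (1+g(\theta))^{t}\) gives \(\partial A_t/\partial\theta = t\, A_0 (1+g(\theta))^{t-1} g'(\theta)\), which grows with \(t\), a longer horizon \(T\) tends to push the optimal \(\theta\) up (at least when \(\beta(1+g(\theta))\) isn’t too small); that’s one crude way the there’s-always-more-value-in-thinking-better intuition could show up in a model like this.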

onward to Note 7!


  1. I mean like, up to the heat death of the universe (if that ends up holding up) or maybe some other limit like that. What I really mean is that there isn’t a time of thinking/fooming followed by a time of doing/implementing/enjoying.↩︎

  2. I wonder if it’d be possible for the relative role of philosophy/[conceptual refactoring]/thought in thinking better to be reduced (compared to e.g. the role of computational resources) in the future (either indefinitely, or for some meaningful period of time). For example, maybe we could imagine a venture-capitalist-culture-spawned entity brazenly shipping a product that wipes it out, followed by that thing brazenly shipping an even mightier product that wipes it out, and so on many times in succession, always moving faster still and in a philosophically unserious fashion and breaking still more things? That said, we could also imagine reasons for the relative role of serious thought to go up in the world — e.g., maybe that’d be good/rational and that’s something the weltgeist would realize more when more intelligent, or maybe ideas becoming even easier to distribute is going to continue increasing the relative value of ideas, or maybe better mechanisms for capturing the value provided by philosophical ideas are going to come into use, or maybe a singleton could emerge and have an easier time with the coordination issues that prevent the world from being thought-guided. Anyway, even if there were a tendency in the direction of the relative role of philosophy being reduced in the world, there’s probably no tendency for philosophy to be on its way out of the world. (I mean “philosophy” here in some sense that is not that specific to humans — I think philosophy-the-human-thing might indeed be lost soon, sadly (because humans will probably get wiped out by AI, sadly).)↩︎

  3. or potentially some future more principled measures of this flavor↩︎

  4. Like, maybe I’d say that the end of thinking/fooming is still further away than \(10^{10}\) years of thinking “at the \(2024\) rate”.↩︎