6 thinking (ever better) will continue
- Could history naturally consist of a period of thinking, of figuring
stuff out — for example, of a careful long reflection lasting \(10^5\) years, during which e.g. ethics
largely “gets solved” — followed by a period of
doing/implementing/enjoying stuff — maybe of tiling the universe with
certain kinds of structures, or of luxury consumerism?
- Could history naturally consist of a period of fooming — that is,
becoming smarter, self-reprogramming, finding and employing new
thought-structures — followed by a period of doing/implementing/enjoying
stuff — maybe of tiling the universe with certain kinds of structures,
or of luxury consumerism?
- a mostly-aside: These two conceptions of history are arguably sorta
the same, because figuring a lot of stuff out (decently quickly)
requires a lot of self-reprogramming, and doing a lot of
self-reprogramming (decently quickly) requires figuring a lot of stuff
out. And really, one probably should think of gaining new understanding
and self-reprogramming-to-think-better as the same thing to a decent
approximation. I’ve included these as separate conceptions of history,
because it’s not immediately obvious that the two are the same, and in
particular because one often conceives of a long reflection as somehow
not involving very much self-reprogramming, and also because the point I
want to make about these conceptions can stand without having to first
establish that these are the same.
- It’d be profoundly weird for history to look like either of these,
for (at least) the following reasons:
- There’s probably no end to thinking/fooming.
There will probably always be major interesting problems to be solved,
including practical problems — for one, because “how should one think?”
is an infinite problem, as is building useful technologies more
generally. Math certainly doesn’t seem to be on a trajectory toward
running out of interesting problems. There is no end to fooming, because
one can always think much better.
- The doing/implementing/enjoying is probably largely not outside and
after the thinking/fooming; these are probably largely the same thing.
Thinking/fooming are kinds of doing, and most of what most advanced
minds are up to and enjoy is thinking/fooming. In particular, one cares
about furthering various intellectual projects, about becoming more
skilled in various ways, which are largely fooming/working/thinking-type
activities, not just enjoyment/tiling/enjoying-type activities.
- This is not to say that it’d be impossible for thinking or fooming
to stop. For instance, an asteroid could maybe kill all humans or even
all vertebrates, and there could be too little time left before Earth
becomes inhospitable [for serious thought to emerge again on Earth after
that]. Or we could maybe imagine a gray goo scenario, with stupid
self-replicating nanobots eating Earth and going on to eat many more
planets in the galaxy. So, my claim is not that thinking
and fooming will necessarily last forever, but that the natural
trajectory of a mind does not consist of an era of thinking/fooming
followed by some sort of thoughtless era of
doing/implementing/enjoying.
- So, superintelligence is not some definite thing. If I had to
compare the extent to which superintelligence is some definite thing to
the extent to which general intelligence is some definite thing, I think
I’d say that superintelligence is even less of a definite thing than
general intelligence. There’s probably not going to be a time
after superintelligence develops at which
intelligence has stopped developing. Similarly/equivalently, there’s
no such thing as a remotely-finished-product-[math ASI].
- All this said, it seems plausible that there’d be a burst of growth
followed by a long era of slower growth (and maybe eventually decline)
on measures like negentropy/energy use or the number of computational
operations (per unit of time) (though note also that
the universe has turned out to be larger than one might have thought
many times in the past and will plausibly turn out to be a lot larger
again). It doesn’t seem far-fetched that something a bit like this would
also happen for intelligence, I guess.
- I should try to think through some sort of more careful
economics-style analysis of the future of thinking, fooming, doing,
implementing, enjoying. Like, forgetting for this sentence that these
are not straightforwardly distinct things, if we were to force a fixed
ratio of thinking/fooming to doing/implementing/enjoying, what should we
expect the marginal “costs”/“benefits” to look like, and a shift in what
direction away from the fixed ratio (and how big a shift) would that
suggest?
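- To make the fixed-ratio question slightly more concrete, here is a toy model. Everything in it (the functional forms, the parameter `g`, the function names `value`, `marginal`, `optimum`) is a hypothetical assumption of mine, not something claimed in this note: suppose a fixed budget of effort is split between thinking (fraction \(x\)), which multiplies the productivity of doing, and doing (fraction \(1-x\)), which yields direct value, with productivity \(A(x) = (1+g)^x\) and total value \(V(x) = A(x)\,(1-x)\).

```python
import math

def value(x, g=10.0):
    """Toy value function (assumed form): V(x) = (1 + g)^x * (1 - x).

    x is the fraction of effort spent thinking/fooming; (1 - x) is the
    fraction spent doing/implementing/enjoying; g is an assumed parameter
    for how strongly thinking compounds into better doing."""
    return (1.0 + g) ** x * (1.0 - x)

def marginal(x, g=10.0, eps=1e-6):
    """Numerical marginal value of shifting effort toward thinking."""
    return (value(x + eps, g) - value(x - eps, g)) / (2 * eps)

def optimum(g=10.0):
    """Closed-form optimum of this toy model: setting V'(x) = 0 gives
    x* = 1 - 1/ln(1 + g)."""
    return 1.0 - 1.0 / math.log(1.0 + g)
```

  One feature of these (made-up) functional forms: the more strongly thinking compounds (larger `g`), the larger the optimal thinking share `optimum(g)`, and it approaches 1 rather than some interior fixed ratio. That at least gestures at the intuition above that there may be no natural point at which thinking winds down relative to doing.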
- That said, even if this type of economic argument were to turn out
to support thinking/fooming eventually slowing down relative to
implementing/enjoying, I might still think that the right intuition to
have is that there’s this infinite potential for thinking better anyway,
but idk. And I’d still think we’re probably not anywhere close
(in “subjective time”) to the end of thought/fooming.
onward to Note 7!