"a bunch of other learning for the RH to do" - this is a good prompt. what can this learning be? https://twitter.com/Malcolm_Ocean/status/1246287812527259649
but first, what learning does LH do, again?
I think there are different things LH learns when in supportive mode (Kegan 3, 5) vs when in leading mode (Kegan 2, 4).
in leading mode, it figures out how to satisfy requirements imposed by [feelings].

at Kegan 2, "what I want" is not-LH and "how do I get it" is LH.

at Kegan 4, "what I want" is not-LH and "how do I build a consistent system / set of rules that allows getting it" is LH.
kids at Kegan 2 probably aren't bothered by having to be consistent. the *skill to play with sets of rules* is something LH learns later. perhaps it doesn't even have to optimize for consistency - the skill can be used in other ways.
the "Kegan seesaw" tweet, for context: https://twitter.com/TheOrangeAlt/status/1246166047054598146
if "Kegan 6 designs processes" is correct, then at Kegan 6 "what I want" is not-LH and "how do I design the world that the thing I want will occur more often" is LH.

design a seed, plant it, watch it grow. some chaos will result. it's okay.
designing the perfect ethics is a Kegan 4 thing - cf. [many philosophers]

figuring out what kind of AGI to build, *if you assume you won't be able to fix anything after it takes over*, is a Kegan 6 thing - cf. @ESYudkowsky
back to learning.

when LH is in supportive role, it learns self-restraint (?). Kegan 3: "oh, so apparently people might want different things than I want". Kegan 5: ???.
perhaps not restraint, but... the limits of its own power. when I'm reading McGilchrist (who describes the limits of LH's power), I'm doing it from my leading edge (Kegan 5).
and then the seesaw happens. "alright, I have learned what the limits are, now step back - time for action".

(tongue-in-cheek?): first you meditate, and then you write a bestselling book on How To Meditate Properly, start a certification body, etc.
or: first you do a startup, then you go into meditation, then you do another startup but differently.

might be more granular than "one full Kegan stage". a surge of action after every small revelation you get.
from this point of view, a part of "enlightenment" is forever banishing your LH into the supporting role.

which might be fine?

this, now, is the point where I have shifted towards "enlightenment isn't so bad".
I think there might also be a completely different axis of development: the level of fusion / respect between the hemispheres.
the seesaw can go on forever. it doesn't have to.

first, RH might recognize that LH is a) better at optimizing/whatever and also b) *is having fun doing so*.
second, LH might recognize that it will never do something RH will be entirely happy about. neither a set of rules for action, nor a set of rules for picking between systems, nor a "seed" for the civilization, nothing.
and now I am back to disliking enlightenment again.

enlightenment doesn't seem to respect both hemispheres - it seems to kill off both hemispheres. it makes LH serve RH, and at the same time kills RH's ability to be discontent with what it sees.
synthesis: "LH is having fun *and* RH is having fun".

enlightenment: "neither LH *nor* RH are having fun".
and sure, enlightenment does solve the Seesaw Problem, but at an enormous cost.
my solution to the Seesaw Problem is different, and better. "Jesus, let them both do their thing".

LH is finding solutions. not because they are necessary, but just for the sake of it. RH is finding "things to want" (= problems), again, just for the sake of it.
they help each other.

but not because they are working towards a common goal. *they don't have a common goal* (unless you're willing to count survival/reproduction? maybe? dunno). they help each other because each can have more fun with the other's help.
oh, and circling back to @Malcolm_Ocean: he was right? having to pick *a* master is a dysfunction, and it leads to a pattern that wouldn't appear otherwise. I don't know how much LH is to blame for this. I also don't know if it's avoidable.
https://twitter.com/TheOrangeAlt/status/1246367901445210113
addressing this as well.

the reason LH/RH is not a dual-process theory is that LH and RH each rely on the other, heavily.

the reason LH/RH is not *not* a dual-process theory is that the nebulosity of the distinction doesn't mean there isn't one. https://twitter.com/Meaningness/status/1229920766436630528
am I lumping a lot of things into two categories? *sure* I am, there's no doubt about it.

anthropomorphizing the hemispheres and going "LH wants" or "RH says" is not (necessarily) the ground truth. it's a helpful trick that lets me notice things better.
looking at finer-grained distinctions will probably be very useful when I get better at looking at them.
going back to "let them both do their thing", also see this. there's a part of you that likes self-hatred? there's a part of you that hates self-hatred? cool! they can co-exist. figure out a way for them to co-exist better. https://twitter.com/Fooljeff/status/1246361356401405952