I am *very* excited about the potential for augmented collective intelligence as @RoamResearch begins to roll out multiplayer and an API. I think balancing context and privacy is going to be one of the most important challenges to solve to realize this potential. A thread on why/how 1/N

There is an immense amount of value in personal knowledge graphs, whether they are represented latently, as in a linear notebook, or (as is increasingly the case) explicitly, as personal wikis or [[networked notebooks]].
Personal knowledge graphs used to be the niche domain of hackers but, with the rise of tools like @RoamResearch and @TiddlyWiki, are slowly but surely making their way through the rest of the knowledge worker userbase. Some details on this here: https://github.com/sig-cm/JCDL-2020/blob/master/JCDL_Where_the_rubber_meets_the_road_2020-6-28-FINAL.pdf
As adoption of these tools grows, we are approaching a critical mass of knowledge that could yield significant value if linked and/or made shareable in some way.
This means the vision of augmenting collective intelligence through approaches like @WardCunningham's Federated Wikis (vs. a universal Wikipedia) is becoming less of a pipe dream, and more of a thing we might want to figure out how to actually do.
There is a critical problem here, though: a tension between [[privacy]] and [[context]] that we need to solve if we want to bridge private and public knowledge.
Personal knowledge graphs gain their power by being open and densely connected, without regard for the ordinary boundaries between work and not-work, or between projects.
The power of networked thinking comes from breaking down artificial silos between knowledge, to power [[analogy]] and [[Conceptual Combination]], ultimately enabling a more sophisticated level of knowledge [[synthesis]].
For example, important [[context]] for ideas often comes from the knowledge worker's personal assessment of their credibility.
These assessments can be quite personal and preliminary, such as downweighting a claim from a lab with a known (but not public) history of p-hacking or sensationalism until it is replicated by other labs.
A historical example: Darwin's wrestling with the evidentiary gaps in his theory was intertwined with musings about his reluctance to "go public" with it, given the hostile atmosphere toward scientific explanations that left out a role for the Divine. More details in https://www.amazon.com/Darwin-Man-Psychological-Scientific-Creativity/dp/0226310078
As another example, today, many users of [[networked notebooks]] like @RoamResearch freely mix the personal and the professional, mentioning people, resources, and data across projects.
This blurring between private and public, this-project and that-project, and personal and professional, is incredibly generative for creative work. But it also poses a serious challenge to making these graphs shareable, whether directly, or through some kind of [[federation]].
So there's a challenge for @RoamResearch as it moves to multiplayer: How do we prevent unauthorized or unwanted access to ideas and thoughts whose exposure might harm the knowledge sharer?
But there is a related challenge here that makes this problem harder: Knowledge must be recontextualized to be usefully reused. We know this from decades of CSCW research. See, e.g., https://socialworldsresearch.org/sites/default/files/SharingKE_pre-press.pdf
Without understanding why an idea was written, how it relates to other ideas, what its precise meaning is in the [[context]] of the knowledge sharer, how that idea is grounded in observations, details, and evidence, the "mileage" of that piece of information is extremely limited.
The implication of this is that sharing only entry points into a knowledge graph (without their connections) will likely severely blunt their usefulness for others.
So the challenge for @RoamResearch and #roamcult is really: How do we provide enough [[context]] in entry points from a knowledge graph to benefit the knowledge reuser, while protecting the [[privacy]] of the knowledge sharer?
One interesting and difficult formulation of this challenge is how to design a principled approach to predicting whether "background" or [[context]] nodes (that link to a given target node in a graph) should be visible to a prospective reuser, and if so, to what extent.
For example, would a "privacy" flag for these nodes be workable? How would that flag even be conceptualized? Or computed?
It seems almost obvious to me that privacy flags will need to be [[context]]ual in some way: what's appropriate for a labmate to see from a PhD student's notes is different from what a PI should see, and different entirely from what an unknown outside researcher should see.
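To make that concrete, here is a minimal sketch of what a contextual (rather than binary) privacy flag could look like: a per-node policy keyed by audience tier. Everything here (`Audience`, `Visibility`, `Node`) is hypothetical and purely illustrative, not anything @RoamResearch has built.

```python
from enum import Enum

class Audience(Enum):
    """Illustrative audience tiers; a real system would let authors define their own."""
    LABMATE = 1
    PI = 2
    PUBLIC = 3

class Visibility(Enum):
    HIDDEN = 0      # node not shown at all
    TITLE_ONLY = 1  # title visible, body redacted
    FULL = 2        # fully visible

class Node:
    """A graph node carrying a per-audience visibility policy (hypothetical)."""
    def __init__(self, title: str, body: str):
        self.title = title
        self.body = body
        self.links: set[str] = set()      # titles of nodes this node links to
        self.policy: dict[Audience, Visibility] = {}
        self.default = Visibility.HIDDEN  # fail closed: hidden unless stated otherwise

    def visibility_for(self, audience: Audience) -> Visibility:
        # The "flag" is a function of who is asking, not a single bit.
        return self.policy.get(audience, self.default)

# E.g., a preliminary credibility note might be fully visible to a labmate,
# title-only to the PI, and fall through to HIDDEN for everyone else:
note = Node("p-hacking concerns re: Lab X", "downweight until replicated")
note.policy[Audience.LABMATE] = Visibility.FULL
note.policy[Audience.PI] = Visibility.TITLE_ONLY
```

The point of the sketch is just that the flag is a function of the audience, and that it fails closed.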
But of course, as noted above, privacy is not the only consideration: we need to balance it against the likelihood that a node is "needed" to usefully re[[context]]ualize the target node. What might a model for that look like?
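One naive model, purely as a sketch building on the hypothetical types above: approximate "needed for recontextualization" by graph proximity, collapse the author's contextual flag into a scalar cost, and share a background node only when utility outweighs cost. The proximity heuristic and the `tradeoff` parameter are assumptions for illustration, not claims about what would actually work.

```python
def jaccard(a: set, b: set) -> float:
    """Link-set overlap: a crude proxy for how much one node contextualizes another."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def context_utility(background: Node, target: Node) -> float:
    # Assumption: a background node that links directly to the target, or that
    # shares many neighbors with it, is more likely needed to recontextualize it.
    direct = 1.0 if target.title in background.links else 0.0
    return max(direct, jaccard(background.links, target.links))

def privacy_cost(background: Node, audience: Audience) -> float:
    # Assumption: the author's contextual flag can be collapsed into a scalar cost.
    cost = {Visibility.FULL: 0.0, Visibility.TITLE_ONLY: 0.5, Visibility.HIDDEN: 1.0}
    return cost[background.visibility_for(audience)]

def should_share(background: Node, target: Node, audience: Audience,
                 tradeoff: float = 1.0) -> bool:
    """Share a background node only when its contextual value outweighs its privacy cost."""
    return context_utility(background, target) > tradeoff * privacy_cost(background, audience)
```

Even this toy version surfaces the hard part: `context_utility` and `privacy_cost` both have to come from somewhere.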
A vexing issue here is that, as the usefulness of a personal knowledge graph grows (in proportion to its density, interconnectedness, and boundary-blurring), the feasibility of manually flagging things as useful and/or private approaches zero.
I think these are hard and interesting open research problems that we now have a unique opportunity to make progress on. If you have thoughts on answers and/or are interested in grappling with them with me as researchers / participants, I'd love to talk to you! cc @Conaw /END