As most of us here can tell, there is a huge problem with the internet today.
The inability of otherwise capable, logical people to discern truth from fiction is astounding to me.
This crisis was foreseeable. As we outsource more and more of our thinking, we give up our own discernment skills day by day. In the face of increasing complexity, this is a disastrous path, and we have already gone quite a long way down it.
Personally, I notice this trend most profoundly in Reddit threads, where people often go to a subreddit of their choice not only to be told what the news is, but also what to think of it. Estimates are that only about 1% of users actually participate in commenting.
The pandemic has only exacerbated this problem, and it's pretty clear there is blame on many sides. People want simple answers in a complicated time, and that desire invites plenty of unsavory actors to prey on it.
Over the past few years I've been fixated on the possibility of a decentralized collective intelligence. In my own version of the model, it would begin as a commenting and authenticity-scoring tool. The emphasis would be on a genuine search for the strongest thesis and antithesis on any given position. Authors would be encouraged to argue both sides and correct any obvious flaws in the arguments, thereby helping each side refine its best and most coherent case.
I named this tool "Argumend". The goal was an incentive structure that rewarded cooperation and increased value for the community. Once the honest arguments on both sides are on the table, we can reach consensus on what each side's position actually is, even if the sides themselves still disagree.
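To make the cooperation-rewarding idea concrete, here is a minimal toy sketch, not the actual Argumend design: all names (`Claim`, `cooperation_score`) and the specific scoring rule are hypothetical assumptions, chosen only to illustrate how an incentive structure might weight contributions that strengthen both sides of an argument over purely partisan ones.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """Hypothetical sketch of a claim whose thesis and antithesis
    are both refined by the community (not the real Argumend model)."""
    text: str
    thesis_edits: dict = field(default_factory=dict)      # user -> edit count
    antithesis_edits: dict = field(default_factory=dict)  # user -> edit count

    def record_edit(self, user: str, side: str) -> None:
        # Track which side of the argument a contributor refined.
        edits = self.thesis_edits if side == "thesis" else self.antithesis_edits
        edits[user] = edits.get(user, 0) + 1

    def cooperation_score(self, user: str) -> int:
        # Assumed toy rule: every edit counts once, and balanced
        # contributions (refining BOTH sides) earn a bonus, so
        # strengthening your opponent's best case pays off.
        t = self.thesis_edits.get(user, 0)
        a = self.antithesis_edits.get(user, 0)
        return t + a + 2 * min(t, a)

claim = Claim("Remote work increases productivity")
claim.record_edit("alice", "thesis")
claim.record_edit("alice", "antithesis")  # alice refines both sides
claim.record_edit("bob", "thesis")        # bob only pushes one side
print(claim.cooperation_score("alice"))   # balanced contributor: 4
print(claim.cooperation_score("bob"))     # partisan contributor: 1
```

The design choice the sketch illustrates: a contributor who improves the opposing side's argument scores higher than one who only reinforces their own, which is the cooperative behavior the incentive structure is meant to reward.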
From there it would be much easier to cooperate on a synthesis that could potentially solve many of the large disagreements we face in society.
There's much more to discuss here, but over the past couple of days I've been enmeshed in a series of talks on this topic that I've found invaluable. Quite a few people have been thinking in this direction, so I wanted to highlight these for anyone else who may be interested.
The video below was the first one a friend turned me onto. Here Daniel Schmachtenberger gives a very lucid interpretation of the problems we are facing. Personally, I am more focused on the issues in the information ecology, and I think narrowing the scope of any possible solution to that domain could bootstrap any broader civilization-wide redesign. Still, it was interesting to see these problems discussed from the wider perspective.
In the video below, Jordan Hall discusses a similar set of ideas from a different perspective. He describes the challenge as the inability of a "Blue Church" of broadcasters to process and assimilate information effectively in the face of an ever more complex world. Jordan describes the emergence of a new distributed collective intelligence and how individuals need to start preparing themselves for the future that this type of society requires.
There are a number of other relevant videos on the Rebel Wisdom channel. The comment section is quite active, and one particular comment led me to an interesting and eclectic community that I need to investigate further: https://www.lesswrong.com/
Personally I believe that if we can prove a model of collective intelligence that solves a focused problem in a narrow scope, it will help to push this evolution much more rapidly.