What do we do about wicked problems? --That is, problems that we can't all even agree exist, much less define well; problems that have no metric for determining their extent, or even whether our interventions mitigate them? I don't have answers, but will venture to suggest a direction for us to look.
The internet has exposed a flaw in our grand plan to unite humanity: it turns out that increasing people's ability to exchange messages does not, by itself, increase their ability to communicate. The Net has developed a paradoxical power: for every community it brings together, it seems to drive others apart. Eli Pariser's idea of the Filter Bubble is one expression of this phenomenon. The problem arises because it is easier to communicate with people who share your understanding of a given set of terms and phrases than with people who understand those terms differently. Automatic translation is no answer to our diverging worldviews, because each person and social group has its own private grammar. It takes work to learn that grammar, and the work can't be offloaded to an automated system. At least, not entirely.
That's why processes that tackle complex group cognition usually exhibit an obsession with words. For instance, in Structured Dialogic Design (an example I use just because it's one I know well) most of the session's time is spent learning what people mean by the terms they're throwing around. This may seem boring and tedious, but it's absolutely essential--the unsexy plumbing work of the 21st century.
For instance, if we were to try to scale up SDD or a similar process, we might create a web-based system in which participants can define a problem or issue. The person who defines an issue owns it. Other participants can then join the discussion and brainstorming around the issue, but to enter the brainstorming group they must first submit a rephrasing of the issue in their own words. The issue's owner decides whether that restatement shows an understanding of what the owner meant. Those who demonstrate they understand how the issue is being framed for the purposes of the discussion can then proceed to help work on it.
Also critical is the question of who gets to work on a problem. To put it bluntly, the people who are affected by proposed changes need to have a say in them. The person who defined the issue doesn't get to say who the stakeholders are; a wider and more inclusive process does this--and political representation has a place here. If you don't have this inclusion of interested parties, you get the kind of botched social experiments that James C. Scott talks about in his book Seeing Like a State. (Think Soviet collectivization.) In systems terminology, you fail to properly employ Ashby's Law of Requisite Variety.
In my theoretical problem solving app, if the issue as defined can't attract representatives from enough of the identified stakeholder groups, then there's something wrong with the definition of the problem, and discussion on it cannot proceed.
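The two gating rules above can be sketched in code. This is only a minimal illustration of the hypothetical app, under assumptions of my own: the class and method names (`Issue`, `request_to_join`, `can_proceed`) are invented, and the owner's judgment of a restatement is reduced to a boolean.

```python
from dataclasses import dataclass, field

@dataclass
class Issue:
    """A problem statement owned by the participant who defined it."""
    owner: str
    statement: str
    required_stakeholder_groups: set = field(default_factory=set)
    participants: dict = field(default_factory=dict)  # name -> stakeholder group

    def request_to_join(self, name: str, group: str, restatement: str,
                        owner_approves: bool) -> bool:
        """Gate 1: a would-be participant must restate the issue, and only
        the owner judges whether the restatement shows understanding."""
        if owner_approves:
            self.participants[name] = group
        return owner_approves

    def can_proceed(self) -> bool:
        """Gate 2: discussion opens only when every identified stakeholder
        group has at least one admitted representative."""
        represented = set(self.participants.values())
        return self.required_stakeholder_groups <= represented
```

With, say, two required stakeholder groups, `can_proceed()` stays false until an approved restatement has come in from each group, which is exactly the failure mode described above: a definition that can't attract its stakeholders never opens for discussion.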
We have other biases and limitations to work around. One is the Erroneous Priorities Effect, which arises when groups are allowed to vote on the relative importance of a set of issues. Straight democratic voting breaks down in this circumstance; you need to do a binary pairing exercise, where you ask, "would solving issue A help solve issue B?" and then "would solving issue B help solve issue A?" iteratively through the issues until you build an influence map that shows the true root(s) of the problematic mess. This is one process where computers can be of immense help; this is how the CogniScope software for SDD works.
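The pairing exercise can be sketched as a small graph computation. This is a simplified reading of the process, not CogniScope itself: I assume the group's answer to each "would solving A help solve B?" question is available as a yes/no function, and I treat the roots of the mess as issues that influence others while being influenced by none.

```python
from itertools import permutations

def build_influence_map(issues, helps):
    """Ask 'would solving A help solve B?' for every ordered pair of
    issues, recording an edge A -> B whenever the answer is yes.
    `helps(a, b)` stands in for the group's judgment on that pair."""
    edges = {a: set() for a in issues}
    for a, b in permutations(issues, 2):
        if helps(a, b):
            edges[a].add(b)
    return edges

def root_issues(edges):
    """Root issues drive other issues but are driven by none: they have
    outgoing influence edges and no incoming ones."""
    influenced = {b for targets in edges.values() for b in targets}
    return {a for a, targets in edges.items() if targets and a not in influenced}
```

For three issues where the group judges that solving "funding" would help solve "turnover", and solving "turnover" would help solve "morale", the map bottoms out at "funding" as the root of the mess, even if a straight vote might have ranked "morale" as most important.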
So much for fantasizing about what I would do if I were king; these are just suggestions. I do think, though, that this stuff should be built into our social media at a very basic level. Why is it even possible to have misunderstandings online when we have all these tools at hand to help prevent them? It's because social media systems like Facebook are just the tricycle version of what social media will become. Facebook barely hints at what's coming; it's social media with training wheels on. What's coming is political media, media that extract commitments from their users and employ those commitments to help solve complex, otherwise intractable real-world problems. (Existing collective intelligence apps such as Wikipedia rely on community involvement, but as Aleco Christakis puts it, involvement is to commitment as ham is to eggs: the chicken is involved, the pig is committed.)
So what are the skill-sets needed for this next great leap forward? I can tell you, it's not computer programmers that we'll need; it's not technologists. We need social scientists. A lot of them.
I wrote my great nanotech book back in the '90s. I've played the augmented reality and artificial intelligence cards in my novels. Biotech is yesterday's future. --No--what I'll be writing about from now on--what the 21st century will belong to--is cognitive and social science, because our technological society's one big blind spot is that we can imagine everything about ourselves and our world changing except how we make decisions. That is precisely the sea-change that is rushing toward us--or more properly, that we have the historic opportunity to seize and design. Our age belongs not to some attempted re-engineering of human nature, the sort of thing that so many millions died for in the last century. It belongs to a maturing of our ability to govern ourselves as we are.
Because it doesn't matter what we're capable of doing, if we continue making the wrong decisions about what to do.