We think in networks but we have to communicate in sentences. Sometimes we can sketch out a diagram or find a stock photo, which saves a bunch of words.
But even using multimedia doesn’t seem to save us time in the long run. Whatever we produce still only works for a certain audience and it still falls out of date. We keep pushing the rock up the hill just to have it roll back down. As if that wasn’t enough, there’s always the lingering feeling that someone else has probably already explained this if only we could find it.
What we’re dealing with is the cost of getting thoughts in and out of our heads. The conversion doesn’t come cheap. It’s like trying to live-tweet Burning Man in Morse code.
I think we can solve most of this problem if we can keep information related to itself while it’s outside of our head.
In general, humans only experience concepts in their own minds. To communicate those concepts we have to convert them into something tangible. Speech is as old as humanity; writing is thousands of years old. Both are one-dimensional, meaning we have to serialize the network of concepts in our head into a code that other people can de-serialize correctly. Sight and touch are multi-dimensional, so the conversion requires less simplification and is therefore richer.
Performing the serialization (e.g., writing) and de-serialization (e.g., reading) takes training, practice, and effort. Even when the juice is worth the squeeze, there is a limit to the complexity of the mind map we can work with. We deal with this by learning a part of the system well enough that recalling and working with its concepts is effortless. Then we either learn more details about one concept, or we learn how that whole package fits into a larger system as a “single” concept.
In the system “habitats” a forest is just a node. In the system “forest” a tree is just a node. In the system “tree” a leaf is just a node. In the system “leaf” a cell is just a node. In the system “cell” the mitochondria is the powerhouse. Etc.
The limiting factor is how many relationships we can represent simultaneously. We don’t care about the bigger picture when we want details and we don’t care about details when we want the big picture. But both the big picture and the details need to be in the same model. When we model them separately they get out of sync and can’t be manipulated across models.
We do this in our head, but we can’t represent it properly in the real world.
What we need is a modeling system for concepts that accepts an infinite number of arbitrary dimensions so that making use of it gets easier over time, instead of remaining just as difficult or getting harder.
It would probably take the same amount of training and/or time to get information into the model, but that’s the only limitation. With everything in one consistent model we’d get the benefit of network effects and we’d be able to make computers do most of the thinking for us! Two Pokémon with one Poké Ball.
All we need to do is agree on a universal syntax for graph elements. If we can have one common standard for how elements and their relationships are described we can have one conceptual space. Like how the addresses for stuff on the internet are all formatted (mostly) the same way. The other thing we’ll need is a unique identifier for every single element. That way we can distinguish between things that are closely related, but aren’t the same.
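As a sketch of what that universal syntax might look like, here’s one possible shape for a graph element: a universally unique ID plus a list of labeled links to other elements’ IDs. The field names are my invention, not any existing standard:

```python
import uuid

def make_element(label):
    """A graph element: a universally unique ID, a human-readable label,
    and a list of (relationship, target-id) pairs."""
    return {
        "id": str(uuid.uuid4()),  # globally unique, like an address on the internet
        "label": label,
        "links": [],              # (relationship, target UUID) pairs
    }

def link(source, relationship, target):
    """Record that `source` relates to `target`, wherever `target` lives."""
    source["links"].append((relationship, target["id"]))

# Two closely related but distinct elements can never be confused,
# because each carries its own UUID.
forest = make_element("forest")
tree = make_element("tree")
link(tree, "part-of", forest)
```

The UUID is doing the same job a URL does for web pages: it lets any element point at any other element without the two ever having to live on the same machine.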
I think all we really need is an array, identified by a UUID, and indexed on a server. Each graph element gets its own little array and points to all of the other arrays it’s linked to, wherever they happen to be. A simple algorithm could blitz through millions of steps in a second, finding all of the stuff that’s related to the thing we’re looking at.
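A minimal sketch of that blitz, assuming an in-memory index mapping IDs to their arrays (I’ve used readable IDs instead of real UUIDs so the example is easy to follow; the element format is hypothetical):

```python
from collections import deque

# Toy index: ID -> element. Each element lists (relationship, target-id) pairs.
index = {
    "leaf-1":   {"label": "leaf",   "links": [("part-of", "tree-1")]},
    "tree-1":   {"label": "tree",   "links": [("part-of", "forest-1")]},
    "forest-1": {"label": "forest", "links": []},
}

def related(start_id, max_depth=2):
    """Breadth-first walk: collect everything reachable from
    start_id within max_depth hops."""
    seen = {start_id}
    queue = deque([(start_id, 0)])
    found = []
    while queue:
        node_id, depth = queue.popleft()
        if depth == max_depth:
            continue
        for relationship, target_id in index[node_id]["links"]:
            if target_id not in seen:
                seen.add(target_id)
                found.append((relationship, target_id))
                queue.append((target_id, depth + 1))
    return found
```

Starting from the leaf, this walk surfaces both the tree and the forest it belongs to, which is the habitat-to-mitochondria zoom in a few lines of code.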
For example, today I was trying to get people to all use the same document template so that I don’t have to keep correcting little random flaws in the documents they submit to me. What I want people to be able to do is query for something like “the file that is most frequently used as a cover sheet for packages with X in the name.” Instead they have to search through their email for the name of someone who might have sent them the right file, or search through their hard drive for keywords that might be in the file, and then guess at which result is the right version. We’re stovepiped. We can only use the relationships that are recorded, and there aren’t many.
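Once those relationships are recorded, that query becomes a simple count over edges. A sketch with made-up data (the file names, package names, and relationship labels are all hypothetical):

```python
from collections import Counter

# Hypothetical recorded relationships: (file, relationship, package).
edges = [
    ("template_v3.docx",  "cover-sheet-for", "Package X-101"),
    ("template_v3.docx",  "cover-sheet-for", "Package X-102"),
    ("old_template.docx", "cover-sheet-for", "Package X-103"),
    ("notes.docx",        "attachment-for",  "Package X-101"),
]

def most_common_cover_sheet(name_fragment):
    """The file most frequently used as a cover sheet for
    packages with name_fragment in the name."""
    counts = Counter(
        file for file, rel, pkg in edges
        if rel == "cover-sheet-for" and name_fragment in pkg
    )
    return counts.most_common(1)[0][0]
```

No keyword guessing, no email archaeology; the answer falls out of the relationships themselves.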
Another example is how I want to explain something complicated like contracting so that people understand enough of it, but where to start? Explain the details of the step they’re on now, or start with contracting 101, or somewhere in between? If we jump into the details that matter right now they won’t understand how to relate them to anything else, but if we start with an overview they’ll get bored and stop listening. The whole structure is there in my head, but everyone else has to navigate it by asking me questions and listening to the answer.
What I’m trying to build is a system that stores what we know on a server we can all access (instead of inside our individual heads) and that computers can navigate on our behalf. That way when one person knows something they can just describe it in the one universal model, allowing it to be related to everything else, and the knowledge will compound on itself. The computer will know the details and the summary and will feed us whichever we personally need at the moment.