A long time ago, people wanted to understand how the mind works. They laid the foundation of logic. The idea was to define a system of rules. If we follow them, we’ll get the truth.
One problem was, and still is, the starting condition. Consider dialetheism: when does a predicate become well-defined enough to start from? And is the statement “I am false” true or false?
Charles Babbage and Ada Lovelace aimed at a simpler target. If we consider logic itself as a tool for imitating the mind, we might say the two approached truth-finding tools differently: they wanted to create machines that take over basic computations.
Humans follow specific algorithms, built from simple logical instructions, simple predicates, and simple rules, to compute values of polynomial functions. The difference engine automates those algorithms. It is a purely mechanical machine, driven by manual labor.
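The trick the difference engine mechanizes is the method of finite differences: for a polynomial of degree n, the n-th differences are constant, so every new value can be produced by additions alone, with no multiplication. A minimal sketch in Python, where the polynomial p(x) = x² + x + 1 and the name `tabulate` are illustrative choices, not taken from any historical source:

```python
# Method of finite differences, the technique the difference engine mechanizes.
# For a degree-n polynomial, the n-th differences are constant, so each new
# value requires only additions, which mechanical wheels can perform.

def tabulate(initial_row, steps):
    """initial_row holds [p(0), first difference at 0, ..., constant n-th difference].
    Each step adds every difference into the entry above it, yielding p(0), p(1), ..."""
    row = list(initial_row)
    values = []
    for _ in range(steps):
        values.append(row[0])
        # Propagate additions upward from the constant difference,
        # analogous to one turn of the engine's crank.
        for i in range(len(row) - 1):
            row[i] += row[i + 1]
    return values

# For p(x) = x*x + x + 1: p(0) = 1, first difference p(1) - p(0) = 2,
# second difference is constant at 2.
print(tabulate([1, 2, 2], 6))  # [1, 3, 7, 13, 21, 31]
```

Note that a human operator still supplies the initial row by hand; the machine only carries out the repetitive additions, which is exactly the division of labor described above.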
People then started to think of machines that not only follow mechanically-encoded processes but also interpret descriptions of which processes to follow. In the same way that Babbage and Ada’s machine takes equations, rules, and predicates and computes a solution, these machines would take predicates and rules, follow them, and produce a result. The questions were: “What kind of rules and predicates can we provide?” and “Can we build a machine to give us the outcome of any formal system we can think of?”
The answer to the second question is “No”. Alan Turing showed this. We can build machines to interpret a formal system, but we can never be sure, in general, whether such a machine will give us something back. We, humans, have to convince ourselves that the computation terminates; no machine can check that for us in every case. Turing specified a model, a way of encoding and interpreting a formal system. Any machine that reifies this model can execute any list of logical instructions.
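Turing's separation between the generic interpreter and the formal system it interprets can be sketched in a few lines: one loop that looks up rules, plus a table of rules supplied as data. The bit-inverting rule table and the names `run` and `invert` below are hypothetical examples for illustration, and the explicit step bound reflects the point above, since we cannot know in general whether an arbitrary rule set halts:

```python
# A minimal sketch of Turing's model: one generic interpreter loop (the machine)
# plus a table of rules (the formal system it interprets).

def run(rules, tape, state="start", steps=1000):
    """Interpret `rules`: (state, symbol) -> (symbol_to_write, move, next_state).
    `steps` bounds the run, because in general no procedure can decide
    whether an arbitrary rule set will ever halt."""
    cells = dict(enumerate(tape))  # sparse tape, blank cells read as "_"
    head = 0
    for _ in range(steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Example rule set: invert every bit, then halt at the first blank.
invert = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run(invert, "1011"))  # 0100
```

Swapping in a different rule table changes what the machine computes without touching the interpreter at all, which is the universality described above.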
When electronic switches, first vacuum tubes and later transistors, collapsed the gap between logic and electronics, Von Neumann and his contemporaries built approximations of what Turing imagined. These machines stop at a certain moment if the rules we provide never produce a result: they have finite capacity and cannot run forever.
Computers today are the evolution of those attempts. A series of inventions led to bitmapped screens, keyboards, mice, and persistent storage. The programmer now specifies a formal system, storing part of it on disk and entering the rest with the keyboard and the mouse. The computer collects the parts, applies rules to predicates, and infers a bitmap, which the screen displays. Depending on the input, the rules lead to different bitmaps.
Visual output pushed people into thinking differently. Douglas Engelbart and his contemporaries thought of the computer as an extension and a magnifier of the human mind. People at Xerox PARC a few years later started giving meaning to the bitmaps, hints at something beyond zeros and ones. They designed software that draws bitmaps using concepts close to how the human mind models and understands the world, concepts that can be changed with the same operations the mind employs to change its understanding and its models.
They wanted computers to reflect what’s in people’s minds. Those simulations (as Herbert Simon calls them) allow people to visualize, reflect on, enrich, and improve their thoughts. They also offer flexible operations for imagining innovative ways of manipulating the concepts.
Metaphors like “desktop”, “window”, and “scroll” appeared there. We can put things (applications) on “top” of the “desktop”, but not under it. We can scroll in one of four directions, but not in two at once. We can show one application per window, not several.
Apple, during its first years, recruited PARC’s researchers and likewise thought of computers as augmentation devices. Steve Jobs said that computers are bicycles for the mind: their effect on human decision-making would be similar to the effect the wheel had on transportation.
The era of the three models started: what the user gets from the screen, what the programmer communicates through the names and the design of the code, and what the machine gets as binary programs.
In the 1950s and 60s, the motivation for replacing the human mind surfaced, and the early attempts at artificial intelligence appeared. Today, these attempts still get the most attention, propelled by the evolution of machine learning and neural network tools.
In most of these evolutions, software was thought of in a context broader than accomplishing a task for the user. People with different motivations improved the state of the art. Some tried to formalize and automate reasoning. Others tried to outsource repetitive tasks. Others thought of computers as mind amplifiers and tried to invent new worlds that extend physical reality. That said, when you think of building or maintaining software today, what do you have in mind?