Reasoning
What’s reason, and can AIs have it?
Reason is one of those concepts we all seem to agree on. I’ve casually tossed my fair share of “please be reasonable”s, but when pressed for details and definitions we resort to the good old “you’ll know it when you see it” or some other made-up definition on the spot. Lest you think of me as unreasonable, this is all mostly good and fine, but if we are to endow AIs with reason we need to look deeper. So what’s reason anyways?
Would you believe me if I told you that we do not have an agreed-upon and clear definition of reason!? What we have instead are approximations:
“Reason is the cognitive capacity to think logically, solve problems, make decisions, and form judgments based on available information, often involving abstract concepts and critical analysis.” (cobbled from LLMs)
“Reason is the capacity of applying logic consciously by drawing conclusions from new or existing information, with the aim of seeking the truth.” (from the wiki)
Unfortunately, most definitions are vague in the details and anchor themselves to auxiliary concepts like logic, meaning, and consciousness.
Maybe it is that reason is relative to those defining it and thus incomplete, but let’s keep things practical and try to use reason to understand reason. One way is to list specific reasoning types and work our way to a concept from them…
Types of reasoning
Deductive reasoning, which draws specific conclusions from general premises, is a good place to start as it is accessible. Here’s an example: any shape with four equal sides and four right angles is a square (the general premise); the shape in front of us has four equal sides and four right angles; therefore, it is a square.
We can start clarifying some of the auxiliary concepts. For instance, logic here is the application of a rule or premise, and meaning is the associations between the independent elements (side length, number of sides, the specific word we use, aka “square”, the visual signature/shape). Once meaning is reached and logic applied, we can use deductive reasoning to correctly identify a square out of a lineup of shapes.
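To make that deductive step concrete, here’s a minimal sketch in Python (my own illustration, not from any particular library): the premise becomes a rule, and applying it to a lineup of shapes picks out the square. The shape representation (side lengths and angles) is an assumption made purely for the example.

```python
# A minimal sketch of the deduction above: apply a general premise
# (four equal sides and four right angles -> square) to specific shapes.
# The shape representation here is an assumption made for illustration.

def is_square(sides, angles):
    """Premise as a rule: 4 equal sides and 4 right angles -> square."""
    return (
        len(sides) == 4
        and len(set(sides)) == 1          # all sides have the same length
        and all(a == 90 for a in angles)  # all angles are right angles
    )

lineup = {
    "triangle":  ([3, 3, 3],    [60, 60, 60]),
    "rectangle": ([2, 4, 2, 4], [90, 90, 90, 90]),
    "square":    ([3, 3, 3, 3], [90, 90, 90, 90]),
}

for name, (sides, angles) in lineup.items():
    print(name, "->", "square" if is_square(sides, angles) else "not a square")
```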
Logic is a key part of reasoning, but what is it? If you look at the definition, “the study of correct reasoning”, we once more end up with a circular reference!
In simple terms, we’ve found universal rules (based on patterns and relationships) in nature that just happen to be so; if we apply them to an argument we end up with valid statements, and so we keep on using them. The whole process goes by the name of logic.
Perhaps the simplest logical statement is A=A (the law of identity), which states that everything is identical to itself, a rule so apparent that thinking about an alternative universe where it is not true is hard. And so logic represents the bedrock rules of our universe, yet, like reasoning, it seems incomplete, relative, and evolving as a concept.
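The nice thing about rules like these is that they hold no matter what the propositions stand for. Here’s a tiny sketch that brute-forces the truth table in Python to check the propositional cousin of the law of identity (A implies A) along with modus ponens; the choice of rules and the encoding are mine, just for illustration.

```python
# A tiny sketch: brute-force the truth table to confirm that a logical rule
# holds no matter what the propositions stand for. We check A -> A (the
# propositional cousin of the law of identity) and modus ponens.
from itertools import product

def implies(p, q):
    return (not p) or q

identity_ok = all(implies(a, a) for a in (True, False))
modus_ponens_ok = all(
    implies(a and implies(a, b), b) for a, b in product((True, False), repeat=2)
)

print("A -> A holds for every assignment:", identity_ok)            # True
print("modus ponens holds for every assignment:", modus_ponens_ok)  # True
```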
Inductive reasoning, on the other hand, draws general conclusions from specific observations (a favorite of scientists!) and can be considered the inverse of deductive reasoning: the sun has risen every day we’ve observed so far, so we conclude that it rises every day.
A lot of premises can be found via inductive reasoning and then used in deductive reasoning, all the more reason to get inductive reasoning right lest we end up in circular thinking!
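Here’s a small sketch of that hand-off, using the classic white-swan example (my own toy choice, not from the article): a premise is induced from specific observations and then reused deductively on a new case.

```python
# A small toy example: induce a general premise from specific observations,
# then reuse it deductively on a new, unseen case.
observations = [
    {"animal": "swan", "color": "white"},
    {"animal": "swan", "color": "white"},
    {"animal": "swan", "color": "white"},
]

# Inductive step: every observed swan was white -> "all swans are white".
premise_all_swans_white = all(o["color"] == "white" for o in observations)

# Deductive step: apply the induced premise to a new, unseen swan.
new_swan = {"animal": "swan"}
if premise_all_swans_white:
    print("Prediction: the new swan is white")

# The induced premise is only as good as the observations (black swans exist!),
# which is why getting the inductive step right matters so much.
```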
Reason, then, can take a variety of forms. Here’s a (probably incomplete) list of other common reasoning types:
Abductive: Inferring the most likely explanation from an observation.
Analogical: Drawing parallels between similar situations to reach conclusions.
Causal: Identifying cause-and-effect relationships.
Counterfactual: Imagining alternative scenarios to current reality.
Probabilistic: Reasoning based on the likelihood of events.
Spatial: Understanding and manipulating objects in space.
Mathematical: Using numerical and logical principles to solve problems.
Emotional: Using feelings and intuition to guide decisions.
In practice, on the spot.
We humans are distinct from other animals (although perhaps not unique) in the variety of reasoning strategies we employ. It’s not uncommon to have our own private ways of dealing with problems based on our particular experiences, or even to come up with new reasoning types. Mixing and matching, skipping steps, working backwards, drawing diagrams, experimenting, looking at examples, or seeing what happens if you change the rules of the system are just a few of the less common reasoning types that usually get grouped under creativity.
Current AI Reasoning
So what about AIs? I’ll focus on the current top-dog AI type/architecture, the LLM (deep learning), as it has surpassed previous ones in both generality and practicality, and, well, they seem to reason… or do they?
Let’s say we ask an LLM: “If a train travels 120 miles in 2 hours, what is its average speed?” and get the correct answer of 60 miles per hour. A human reasoning their way to the answer would use various reasoning types like the ones we’ve discussed: deductive, analytical, mathematical, quantitative, procedural, etc. An LLM, not so much, as it is in essence predicting the next set of words/tokens, so how come it sounds reasonable?
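For contrast, here’s what a procedural/mathematical solution of that question looks like when written out explicitly, applying the definition speed = distance / time rather than predicting likely words:

```python
# A minimal sketch of the procedural/mathematical route to the answer:
# apply the definition of average speed directly.
distance_miles = 120
time_hours = 2

average_speed = distance_miles / time_hours
print(f"Average speed: {average_speed:.0f} miles per hour")  # 60 miles per hour
```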
In order to answer this we need to look under the hood. LLMs and other AIs are trained on vast amounts of data and validated against an equally great number of known correct pairs or datapoints, so in a way the reasoning was done beforehand by those providing the datasets’ content. But what about new, unseen cases where we, say, double or halve the speed of the train? Inference and generalization are the amazing byproducts of something called the latent space, which means that with sufficient training, data, and guidance an LLM/AI can, by virtue of its state/weights, apply pre-reasoned types to new data.
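As a toy analogy only (this is emphatically not how an LLM’s latent space works internally), here’s a tiny model whose weights are fit on a handful of worked distance/time/speed examples and then queried on unseen cases, including the doubled and halved variants. The training data and the log-space linear model are assumptions chosen purely to make the point about learned weights generalizing to new inputs.

```python
import numpy as np

# Toy analogy: a model whose weights are fit on worked examples, then applied
# to unseen inputs. Each training row: (distance in miles, time in hours) -> mph.
train_X = np.array([[100.0, 2.0], [150.0, 3.0], [200.0, 4.0], [60.0, 1.0]])
train_y = np.array([50.0, 50.0, 50.0, 60.0])

# Fit weights in log-space, where speed = distance / time becomes linear:
# log(speed) = w1*log(distance) + w2*log(time); least squares recovers (1, -1).
W, *_ = np.linalg.lstsq(np.log(train_X), np.log(train_y), rcond=None)

def predict_speed(distance, time):
    return float(np.exp(np.log([distance, time]) @ W))

# Unseen cases: the original question, then doubled and halved distances.
for d, t in [(120, 2), (240, 2), (60, 2)]:
    print(f"{d} miles in {t} hours -> {predict_speed(d, t):.1f} mph")
```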
It's important to reiterate that LLMs are not reasoning in the same way we
do, at least not now, and that we are just starting to grapple with the
philosophical meaning of this seemingly new type of reasoning,
which could also be considered a combination of previously discussed types
like probabilistic, abductive, and mathematical.
So can they reason?
Whether AIs can reason depends on your definition of reason: do you only consider it reason if it goes through logical steps and employs a variety of reasoning types and subsystems? Or do you ignore the process and focus on the results, like one of my favorite quotes:
“Perfect results count, not a perfect process.” (1970s Nike Manifesto)
AIs in general have been categorized into hard/strong, when they have things like free will, qualia, sentience, and the required subsystems to produce said cognitive behaviors like us humans, and soft/weak, or merely human-like, when they simply produce the results by any other means.
As a researcher on artificial consciousness/sentience, I believe the behavior/result is important. We can code a bot that appears sentient, but without the whole sentience apparatus it’s just a hollow fake and not sentient (see the Chinese room); with reason it is a different story, as the result can stand by itself and be sufficient proof of reasoning.
Best of both worlds
I’d like to finish by pointing to the near possible future and the obvious next step in reasoning AIs, which is simply a mix of current AI architectures with other existing reasoning types.
An LLM is good at knowing stuff and sounding coherent, but doing something useful with that knowledge is up to us; we could use that knowledge as part of our reasoning, and that’s how AIs might evolve: using knowledge bits as part of other reasoning types. Even the basic memory operations we use in day-to-day reasoning are not fully developed yet, though things like LLM agents are making inroads, and more complex reasoning types requiring additional functions could also follow.
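As a rough sketch of what that mix could look like, here’s a minimal loop where an LLM-like component proposes a step while separate components handle calculation and memory. The llm_propose function is a hypothetical stand-in (a real system would call an actual model), and the tool and memory layout are assumptions made purely for illustration.

```python
# A rough sketch of the "best of both worlds" idea: an LLM proposes what to do,
# while separate components supply calculation and memory.

def llm_propose(question, memory):
    """Hypothetical stand-in for an LLM turning a question into a tool call."""
    if "average speed" in question:
        return {"tool": "calculator", "expression": "120 / 2"}
    return {"tool": "none", "answer": "I don't know"}

def calculator(expression):
    """Deterministic tool: does the arithmetic instead of predicting it."""
    return eval(expression, {"__builtins__": {}})  # toy example only

memory = []  # running record of past steps, standing in for working memory

question = "If a train travels 120 miles in 2 hours, what is its average speed?"
step = llm_propose(question, memory)
if step["tool"] == "calculator":
    result = calculator(step["expression"])
    memory.append({"question": question, "result": result})
    print(f"Answer: {result} miles per hour")
```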
When predicting the future you always risk missing the mark completely, but that’s part of the job. LLMs, for instance, could be enough to generate every reasoning type, including those we don’t know of, and, well, reason could turn out to be something else and that could unravel the whole thing.
Thanks for reading!