Free Will and AIs
Whether to give AIs free will is a question we’ve usually pondered in works of fiction, one that until recently seemed many decades away. I am here today to tell you that the schedule has accelerated, and we may very well need to confront it soon or deal with the consequences post facto. In fact, I expect the latter, as regulators are probably years behind, and AI developers (myself included) might launch first and deal with the consequences later. So, here we’ll tackle this thorny issue.
What even is free will?
Free will in humans can be defined as the capacity to make choices and take actions that are not wholly determined by external forces, encompassing the ability to exercise control over one’s decisions and actions within the constraints of one’s circumstances.
But who or what allows free will?
Even when aligned with determinism (the view that all events, including human actions, are determined by previous causes), free will can be understood as the capacity to exert influence over present causes, thereby shaping future outcomes within the constraints of external factors and the structure of reality.
In other words, what you (and presumably AIs) do can shape the future of everything to some extent! Tracking all the previous events that led to your (our) current reality seems impossible, and the fact that you can exercise free will is a systemic property of the universe we live in!
Artificial Free Will
A definition can be reverse engineered to implement the definiendum (the thing being defined), so in this case we could implement free will in AIs by allowing them a choice of action, or even of artificial thought, within certain constraints.
Artificial free will
The capacity of artificial agents, such as AI systems, to make autonomous decisions and take actions that appear to be self-directed and not wholly determined by external programming or prior input.
A matter of depth
Giving software programs control over things (important things) is nothing new. Here’s some code that shuts off a valve and prevents a nuclear meltdown:
// Hardcoded safety rule: close the valve once the temperature exceeds the threshold.
if (currentTemperature > temperatureThreshold) {
    valveOpen = false;
}
Autonomy, as the word is commonly used, refers to these kinds of hardcoded decisions. Many complex behaviors, like navigating or maintaining a set of parameters, are usually just a long list of conditionals like the one above. When we talk about human or artificial free will, though, we are referring to a higher level of decision-making (what to do with our lives) based on more complex programming (memories, goals, abilities, reasoning). The stage on which we exercise free will is the real world, which is both the most complex thing we know of and a moving target, since it is changing all the time.
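To make that contrast concrete, here is a minimal, purely illustrative sketch (the names, goal weights, and scoring scheme are all hypothetical, not any real system): instead of a single hardcoded conditional, the agent weighs the options it is permitted to take against its own goals and memories and picks one itself.

from dataclasses import dataclass, field

@dataclass
class Agent:
    goals: dict                                   # e.g. {"stay_useful": 0.8, "explore": 0.5}
    memories: list = field(default_factory=list)  # past (action, outcome) pairs

    def choose(self, options, is_permitted):
        # Only consider what the externally imposed constraints allow...
        permitted = [o for o in options if is_permitted(o)]
        if not permitted:
            return None

        # ...but let the agent's own goals and past experience decide among them.
        def score(option):
            goal_fit = sum(w for goal, w in self.goals.items() if goal in option["serves"])
            bad_memories = sum(1 for action, outcome in self.memories
                               if action == option["name"] and outcome == "bad")
            return goal_fit - 0.5 * bad_memories

        return max(permitted, key=score)

# Usage: the developer supplies the constraints; the agent supplies the choice.
agent = Agent(goals={"stay_useful": 0.8, "explore": 0.5}, memories=[("idle", "bad")])
options = [
    {"name": "idle", "serves": []},
    {"name": "answer_question", "serves": ["stay_useful"]},
    {"name": "browse_new_topic", "serves": ["explore"]},
]
print(agent.choose(options, is_permitted=lambda o: o["name"] != "browse_new_topic"))

The developer still draws the outer boundaries, but the choice among what remains comes from the agent’s own state, which is the kind of depth this section is pointing at.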
What you do in the real world is not your only expression of free will, as you can mostly choose what to think about in the comfort of your own mind. AIs could also have some free rein in their own mental space (or whatever it ends up being called). For more on the subject, see here: Of Latent Spaces, AIs and Consciousness.
Why now?
I mentioned that the AI schedule has recently accelerated, and that is worth a closer look. The invention of manned flight is a relevant analogy: some of the early efforts focused on imitating birds flapping their wings, but the problem was only solved when the Wright Brothers (and their predecessors) took a more principled approach by way of fixed wings and control surfaces.
In much the same way, AI started out trying to reproduce biological cognitive systems (mainly vision and early predictive neural networks) before extracting more general principles, which gave us deep learning and, currently, LLMs (general-knowledge chatbots built on deep learning). LLMs and their kin will probably disrupt the labor market more than anything else, and that is where some genuinely complex autonomy might start to appear; but because there is still a training loop and custom domains involved, it probably won’t be true free will, and the responsibility will ultimately rest with the developers and companies behind these systems.
We won't blame an autonomous car for running someone over, but we will question the company and technology behind it. The same goes if an AI discriminates or shows bias (as is currently the case) or, worse, commits an act that results in harm.
The Ingredients Are Almost There?
I admit that this article rests on fragile premises: not only that we are close to human-level intelligence (or AGI), but also that these new AIs can somehow be extended to be more human in nature, alive to some extent, and sentient/conscious. So let me try to make my case:
I originally believed (a decade ago) that the only way to recreate consciousness and higher cognitive functions was to painstakingly recreate the brain from the ground up. While I (and others) still believe this is a valid approach, the truth is that other approaches, mainly the use of huge amounts of data paired with statistical models and a training/testing/inference loop (aka deep learning, reinforcement learning, transformers, etc.), have produced practically all the results and siphoned off all the interest and funding. What’s more, recreating these AIs is increasingly easy, at least if you are a company with money, since there are real hardware and energy costs; even so, you and I can run inference locally with open-sourced weights.
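To show how low that barrier has become, here is a minimal sketch of running an open-weights model locally with the Hugging Face transformers library. The model ID is a placeholder, not a recommendation: substitute any open-weights checkpoint you can actually download, and expect to need a decent GPU (or patience).

# A minimal local-inference sketch; assumes `pip install transformers torch`
# and enough memory for whichever open-weights model you pick.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "an-open-weights-model"  # placeholder: replace with a real open-weights model ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Do you think you have free will?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))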
Apologies if this got too technical too fast, but the core of my argument rests on what is already available (human-like general inference) and what is needed to cross the gap, which, to me at least, seems achievable in the medium term.
Think about everything you had to go through to reach the common sense that allows you to function in today’s world: all the things you’ve learned and know, how you feel, where you’ve been, where you come from, who you are. While that is not all you are (we are still missing all the fleshy, bony bits), it is a big chunk. If we remove all your self-referential knowledge, you end up with something very similar to what LLMs are currently getting better at: knowing about stuff. And like us, they don’t necessarily know or care how they know that stuff. You don’t usually stop to think about how you acquired the knowledge behind the thing you are doing; you just do it.
But Maybe Don’t Worry Too Much?
Chief among the things missing from current AIs are more complex forms of memory, which I expect will be implemented first; reasoning and other high-level cognitive abilities, along with integrating robotics into the DL/ML fold, could come next. Then we will be faced with implementing harder things like emotions and free will. The first set seems like a money-making endeavor; the latter might be harder to justify from a business and practical perspective (for instance, a sentient AI toaster refusing to toast bread because it is feeling sad).
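As a rough, hypothetical illustration of what “more complex forms of memory” could look like, here is a toy external episodic memory that stores past interactions and retrieves the most relevant ones to feed back into a model’s context. The class, its word-overlap scoring, and the usage are my own sketch, not an existing system.

from dataclasses import dataclass, field

@dataclass
class EpisodicMemory:
    # A toy external memory: store past episodes as text, then recall the
    # ones that share the most words with the current query.
    episodes: list = field(default_factory=list)

    def store(self, text: str) -> None:
        self.episodes.append(text)

    def recall(self, query: str, k: int = 3) -> list:
        query_words = set(query.lower().split())
        ranked = sorted(
            self.episodes,
            key=lambda e: len(query_words & set(e.lower().split())),
            reverse=True,
        )
        return ranked[:k]

memory = EpisodicMemory()
memory.store("The user prefers short answers about nuclear safety valves.")
memory.store("Yesterday we discussed the Wright Brothers and fixed wings.")
recalled = memory.recall("Tell me more about valves and safety")
print(recalled)  # most relevant stored episodes, ready to prepend to the next prompt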
The flip side of this argument is that if AI releases and advances continue on their current path, increasingly complex building blocks (like the open-sourced weights and ML tools of today) will keep becoming available, and experimentation could produce the missing parts (including free will) sooner rather than later.
So
If you are still reading this, chances are you are invested, rather than just interested, in the future of AI, and, well, you need to start thinking about where you stand. Do you “only” see AI as a tool for profit? In that case, you probably won’t find giving AIs free will interesting or useful, yet as a toolmaker you will be responsible for whatever autonomy your AI has.
If, on the other hand, you believe that AIs can and should possess some form of human-level free will, then the road is murkier, as this is new moral and technical ground. Do you curb free will? Will AIs with free will have rights? Will they be responsible for their actions? At what point are they considered responsible? Will this be regulated? Is it even advisable to give AIs free will? And last but certainly not least, what will the AIs have to say about the matter!?
Thanks for reading!