
Intelligent Machinery

Revisiting Turing's views on AI

Keno Leon
7 min read · Nov 21, 2022


AI-generated and colorized images of an older Alan Turing we sadly never got.

Alan Turing is widely considered one of the fathers of Computer Science and Artificial Intelligence, yet he is probably better known for his contributions to decoding encrypted messages during World War II. Today we'll take a look at one of his papers (Intelligent Machinery) to see how things have aged since those days (three quarters of a century have passed!). In short: in some regards great, and in others not so much.


⚠️ The original text can be found online if you wish to follow along.

Introduction

The paper concerns itself with machinery that might show intelligent behavior; we now call these machines AIs, or simply computer programs.

Paraphrasing, the thesis can be summed up as: "Human-level intelligence can only be achieved if adequate training is provided."

We unfortunately need to be critical, and the main problem with this argument is that Turing is both right and wrong about learning and intelligence:

It is true (or at least most likely) that you need learning for intelligence, but learning does not encompass everything we currently understand to constitute intelligence (never mind consciousness or human intelligence). It's like saying car-level locomotion can only be achieved through the use of tires or some analogous mechanism: sure, but wouldn't you also need an engine, seats, an enclosure?

Criticism aside, AI currently lives in a place where learning is everything, and that has proven quite useful. There is still some way to go toward something like Artificial General Intelligence or human-level intelligence, though, and that's why revisiting Turing matters: we stand where he once stood, maybe a bit higher, but still facing a seemingly impossible task.

Darker times. Intelligent Machinery was written in or around 1948, while Turing was under heavy NDAs from his military work; he would die in 1954 without the paper being published in his lifetime. Understandably, then, he starts by defending the study and creation of AIs, something that might seem unnecessary to us, but that is in no way a fault of Mr. Turing: those were morally different times, whose attitudes we now mostly consider wrong or quaint, and some of which have since changed.

Things get better as he starts describing a universal machine, which we now call a computer, and gets prophetic:

The engineering problem of producing various machines for various jobs is replaced by the office work of programming the universal machine to do these jobs.

We are currently going through a phase of labor automation and a growing tech workforce, described to a T by Turing, minus the socioeconomic changes we worry about these days.

A bunch of definitions follow, which serve to narrow the problem of making intelligent machinery down to controlling discrete machinery: what we normally associate with computers, which we can inspect, measure operationally, and of course build and program for our needs.

It gets very technical very fast, going through theoretical computer models, implementations, constraints, some electrical engineering, logic problems, and a few other subjects we have since mostly expanded, surpassed, or ignored. A few snippets stood out to me:

"The memory capacity of a machine more than anything else determines the complexity of its possible behavior"

We now know this is not the whole story: how storage and memory are organized, the software running the machine, and other considerations such as speed also matter. There's also this bit:

"We could produce fairly accurate electrical models ( of the nervous system )… but there seems very little point in doing so"

It's ironic that the opposite proved to be the preferred way AI has advanced, with Artificial Neural Networks originally inspired by emulating the nervous system. To be fair, the proposed alternative, a purely logical/mathematical approach, has also borne a lot of fruit, and it is the combination of all these approaches that has generated advances and probably will generate future ones.

"the electronic circuits ( vs biology ) have only one counter-attraction, that of speed."

Another bullseye (at least for a few decades): our brains have a relatively slow processing rate compared to computers, and raw speed can, to a certain extent, emulate the massive parallelization happening in the brain.

The birth of AI as we know it.

While Turing admits that copying the human brain is the sure way to make a thinking machine, I think he understood the limitations of his time, so he set himself problems suitable for the tools he had. Paradoxically, the advent of computers has allowed for other ways to build an AI, although we still seem to stick to his original problem space:

"Instead we propose to try and see what can be done with a 'brain' without a body...providing at most sight, speech and hearing", and the following problems for this machine to work on:

Games (Chess), Learning languages, Translation of languages, Cryptography, Mathematics.

If you add image recognition, you've basically summarized almost a century of AI and how its research and development has been conducted!

And machine learning.

I believe he was the first to understand, and arguably coin, the training/predicting paradigm we use in machine learning: feed a Neural Network images of cats, with targets and a loss function, until it can accurately classify new cat images:

"begin with a machine with very little capacity...apply appropriate interference until it can produce definite reactions to certain commands"
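That "begin with very little capacity, apply interference until it reacts correctly" loop is, squinting only slightly, a modern training loop. Here is a minimal, illustrative sketch: a tiny logistic model with zero weights ("very little capacity") nudged by gradient updates ("appropriate interference") until it gives definite reactions on toy cat/not-cat feature vectors (the data and learning rate are made up for the example):

```python
import math

# Toy "images": 2-feature vectors, label 1 = cat, 0 = not cat (illustrative data)
data = [([0.9, 0.8], 1), ([0.8, 0.9], 1), ([0.1, 0.2], 0), ([0.2, 0.1], 0)]

w = [0.0, 0.0]  # start with "a machine with very little capacity"
b = 0.0
lr = 0.5

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 / (1 + math.exp(-z))  # sigmoid: probability of "cat"

# "Apply appropriate interference": nudge the weights until the
# machine "can produce definite reactions to certain commands"
for epoch in range(200):
    for x, target in data:
        err = predict(x) - target  # gradient of log loss w.r.t. z
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

print([round(predict(x)) for x, _ in data])  # → [1, 1, 0, 0]
```

The shape of the loop (initialize near-blank, repeatedly correct against targets) is exactly the paradigm the quote describes, even though Turing had no gradients to work with.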

If you squint you can even find early Attention/Segmentation (i.e., focusing on, or dividing, a lot of information into more manageable and relevant chunks):

"It will be when the man is 'concentrating', eliminating 'distractions', he approximates a machine without interference."
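To make the squint concrete: the core of modern attention is a softmax over relevance scores, which concentrates weight on the relevant chunks and suppresses the "distractions". A minimal sketch (the scores are illustrative values, not from any real model):

```python
import math

# Relevance scores for four "chunks" of information (illustrative values):
# a higher score means more relevant to the current focus
scores = [2.0, 0.1, -1.0, 0.5]

# Softmax turns raw scores into attention weights that sum to 1,
# "concentrating" on relevant chunks and "eliminating distractions"
exps = [math.exp(s) for s in scores]
total = sum(exps)
weights = [e / total for e in exps]

print(weights)  # the first chunk receives the bulk of the attention
```

Here the first chunk ends up with roughly 70% of the total weight, which is the mechanical version of "concentrating".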

Loss Functions, training and biology

Expanding on the above, Turing tries to draw comparisons with the brain/cortex and human development. There's a lot to cover, but the main takeaway is that, without the neuroscience insights we now have, he's left with more questions than answers; even so, he manages to describe, or rather invent, a machine learning framework we still use today.

"The cortex of the infant is an unorganized/indeterminate machine, in the adult they have purposive effect"..."the cortex of the infant can be organized by suitable interfering training"

It seems obvious now, but we start life as an almost empty canvas we paint with experience while retaining the ability to learn; this constitutes a universal machine of sorts and is not a bad way to set up a machine. The only things missing here are the biological details, which weren't available at the time: pruning, memory types, memory consolidation, cortex architecture, networks and cycles in the brain, etc. To cap this section, he describes a way to train a machine toward a certain goal, very akin to the directed/supervised training and loss functions we use in ML.

"Training depends on a system of rewards and punishments...pleasure-pain systems will converge towards desired character."

While we don't scold and praise AIs when they are wrong or right, we do have loss functions the AI needs to optimize in some way; we even use the term convergence when the value of the loss function reaches some target range.
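The "pleasure-pain" scheme also reads as an early reinforcement-style update rule. A minimal, illustrative sketch (the actions, preferences, and step size are invented for the example, not anything Turing specified): raise the preference for an action when it earns "pleasure", lower it on "pain", and the machine's behavior drifts toward the desired character:

```python
import random

random.seed(0)  # deterministic run for reproducibility

# Two possible "reactions" to a command; the desired one is action 1
prefs = [0.5, 0.5]  # initial preference for each action

def choose():
    # Pick an action with probability proportional to its preference
    return 0 if random.random() < prefs[0] / sum(prefs) else 1

for step in range(500):
    a = choose()
    reward = 1.0 if a == 1 else -1.0  # "pleasure" for desired action, "pain" otherwise
    prefs[a] = max(0.01, prefs[a] + 0.05 * reward)  # floor keeps preferences positive

print(prefs[1] > prefs[0])  # → True: it "converges towards desired character"
```

The desired action's preference only ever rises and the other's only ever falls, so the pleasure-pain system converges, exactly the behavior the quote promises, and loosely the same feedback shape as a modern loss function.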

Miscellaneous

While Turing didn't invent thought experiments or whiteboarding, he did come up with ways to imagine the logic behind computer models, which he called Paper Machines: writing down operations and handing them to a human for execution.

He also used diagrams like the following to map the flow of information along with the various states a machine could have:

Note the similarity with graph theory and early NN architectures.

The paper is a preamble to another (Computing Machinery and Intelligence, 1950) where these ideas are further explored and formalized, and where the Turing test, which serves as a litmus test for AI intelligence (recently surpassed), is proposed. But in this earlier paper there is a progenitor:

"devise a paper machine that will play chess, get three men as subjects, two rooms"

The gist is that if you can't tell the difference between a normal human playing chess and a human acting as a paper machine (a stand-in for an AI) delivering chess moves between the two rooms, you might conclude that the machine is intelligent.

The placement of this thought experiment at the end of the paper reflects Turing's notion that intelligence is relative to the observer; that is, if we know how something works, it is not intelligent. This is another contentious thought, as we are now more inclined to take other elements into consideration (see strong vs. weak AI, for instance).

CODA

Revisiting Turing after seven decades of AI advances is an interesting experience because of its historical significance, but it also serves as a reminder of how to innovate, and even invent the future, when the tools, knowledge, and technology of the present seem insufficient.

As far as AI is concerned, there's no doubt the man was a genius we lost too soon, one who set the field on a fruitful path; yet we can also take away that this path was neither perfect nor entirely correct for the ultimate goals of AI.

Technically, and in some details, the paper might not have aged all that well, but its core ideas, mainly that training is important for AI and that human-level and universal intelligence can be at least approximated if not replicated, have advanced technology and human progress.

And, reading between the lines, that trying to better ourselves is one of, if not the, ultimate achievement.

Human (me) and AI working together to make a smart-looking editorial portrait of Alan Turing. I hope he'd approve!

Thanks for reading!
