
Math for Experimental AIs and Neural Networks
We all have our preferred methods of understanding complex concepts. For me, illustrations and plain language have always been my go-to tools. Take, for instance, this illustrated explainer on Neural Networks that I penned some years ago:
The world of AI research, however, is predominantly ruled by academic papers and mathematical formalization. My interest lies in experimental AIs rooted in neuroscience, but don’t worry: there’s significant overlap with established, foundational AI principles. My focus here is to guide you (and myself) from an idea, concept, or intuition to a mathematical representation. I’ll also provide some code to start implementing these ideas and translating them into action, a third step that is equally important.
Where to start?
From a bird’s-eye view, the brain is composed of semi-discrete computing units known as neurons (approximately 86 billion) and the myriad connections between them (in the hundreds of trillions). Describing this intricate network, which forms the foundation of neural computation and everything we do, makes a good starting point.
Groups of neurons can be represented in a few ways:

Vectors are good for simple groups of neurons, quick experiments, and some one-dimensional computations, but most of the time you will want a field of neurons, which is better represented by a matrix.
# Vector: a plain Python list with one entry per neuron, all starting at 0
num_neurons = 10
neuron_list = [0] * num_neurons
print("Neuron list:", neuron_list)
# Neuron list: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
# Matrix: a 2-D grid (a "field") of neurons, one entry per neuron
import numpy as np
num_rows = 4
num_cols = 4
neuron_matrix = np.zeros((num_rows, num_cols))
print(neuron_matrix)
# [[0. 0. 0. 0.]
#  [0. 0. 0. 0.]
#  [0. 0. 0. 0.]
#  [0. 0. 0. 0.]]
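If you do stick with a one-dimensional group of neurons, a NumPy array is often more convenient than a plain Python list, because it supports the same element-wise math as the matrix above. Here is a minimal sketch along those lines; the uniform input of 0.5 is just an assumed value for illustration, not something taken from the example above.

# Vector as a NumPy array (sketch):
import numpy as np
num_neurons = 10
neuron_vector = np.zeros(num_neurons)
# Add an assumed input of 0.5 to every neuron at once, no Python loop needed
neuron_vector = neuron_vector + 0.5
print(neuron_vector)
# [0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5]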