# Neural network

**State equations.** The neural states change according to some differential (or difference) equation, say $x' = F(x,w,t)$. Typically (but not necessarily), $-F$ is the gradient of an energy function (in keeping with the biological metaphor), say $F(x,w,t) = -\nabla_x E(x,w,t)$, so that $x(t)$ follows a path of steepest descent toward a minimum-energy state.
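The steepest-descent dynamics can be sketched numerically. Below is a minimal illustration (the quadratic energy, the specific weight matrix, and the Euler step size are assumptions for the example, not from the text): the state $x$ is integrated by explicit Euler steps of $x' = -\nabla_x E(x,w)$, and the energy decreases along the trajectory.

```python
import numpy as np

def energy(x, w):
    # Assumed quadratic energy E(x, w) = 0.5 * x^T w x,
    # with w symmetric positive-definite so the minimum is x = 0.
    return 0.5 * x @ w @ x

def grad_energy(x, w):
    # Gradient of the quadratic energy with respect to x: w x.
    return w @ x

def descend(x, w, dt=0.1, steps=200):
    # Explicit Euler integration of x' = F(x, w) = -grad_x E(x, w).
    for _ in range(steps):
        x = x - dt * grad_energy(x, w)
    return x

w = np.array([[2.0, 0.5],
              [0.5, 1.0]])        # assumed symmetric PD weight matrix
x0 = np.array([1.0, -1.0])        # arbitrary initial state
xT = descend(x0, w)
```

Because the energy here is convex, the state is driven toward the unique minimum-energy state at the origin; with a nonconvex energy the same dynamics would settle into some local minimum instead.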
**Learning mechanism.** This could be an equation that changes the weights: $w' = L(x,w,t)$. Various learning mechanisms can be represented this way, including a form of supervised learning that uses a training set to provide feedback on errors. Elements other than the arc weights can also be learned, including the topology of the network.
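As a concrete instance of a weight-update rule $w' = L(x,w,t)$, here is a hedged sketch of error-driven supervised learning for a single linear unit (the delta rule is one common choice; the learning rate, target weights, and training set below are all assumptions for illustration). The discrete-time update $w \leftarrow w + \eta\,(t - y)\,x$ plays the role of $L$.

```python
import numpy as np

def learn(samples, eta=0.1, epochs=50):
    # Delta-rule learning for a linear unit y = w @ x:
    # the weight change w' = L(x, w) = eta * (target - y) * x
    # is driven by the error on each training example.
    w = np.zeros(2)
    for _ in range(epochs):
        for x, target in samples:
            y = w @ x
            w = w + eta * (target - y) * x
    return w

# Assumed training set: noiseless targets from "true" weights [1.0, -0.5].
rng = np.random.default_rng(0)
true_w = np.array([1.0, -0.5])
samples = [(x, x @ true_w) for x in rng.normal(size=(20, 2))]
w = learn(samples)
```

After training, `w` approximates the target weights, showing how repeated error feedback steers the weight trajectory, the discrete analogue of the continuous rule in the text.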