---
Title: Computational Models of the Neuron
Author: brilliant.org
Tags: readwise, articles
date: 2024-01-30
---
# Computational Models of the Neuron

URL:: https://brilliant.org/courses/artificial-neural-networks/learning-and-the-brain-3/computational-models-of-the-neuron/1/
Author:: brilliant.org
## AI-Generated Summary
A neuron has many inputs but only one output, so it must “integrate” its inputs into one output (a single number). Recall that the inputs to a neuron are generally outputs from other neurons.
## Highlights
> $\vec{w} \cdot \vec{x} + b$. ([View Highlight](https://read.readwise.io/read/01gj3qrtdf1y7hq2bjq277fvt8))
Note: A neuron can be defined by the inputs of OTHER neurons, how strong its connections to those neurons are, and a "bias" that determines when a neuron will fire.
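The highlighted expression can be sketched numerically. The inputs, weights, and bias below are made-up values for illustration, not from the article:

```python
import numpy as np

# Illustrative values only: a neuron receiving three upstream outputs.
x = np.array([0.5, -1.0, 2.0])   # inputs: outputs of other neurons
w = np.array([1.0, 0.5, -0.25])  # weights: strengths of the connections
b = 0.1                          # bias: shifts the firing threshold

z = np.dot(w, x) + b  # the highlighted quantity w·x + b
fires = z > 0         # simple threshold activation: fire iff w·x + b > 0
print(z, fires)
```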
> A biological interpretation is that the inputs defining $\vec{x}$ are the outputs of other neurons, the weights defining $\vec{w}$ are the strengths of the connections to those neurons, and the bias $b$ impacts the threshold the computing neuron must surpass in order to fire. ([View Highlight](https://read.readwise.io/read/01gj3qpn7jsn4467e7s5nqdtr3))
> The hypersurface $\vec{w} \cdot \vec{x} + b = 0$ is called the **decision boundary**, since it divides the input vector space into two parts based on whether the input would cause the neuron to fire. This model is known as a linear classifier because this boundary is based on a linear combination of the inputs. ([View Highlight](https://read.readwise.io/read/01gj3qyjvfyd18y2dg3w6daz74))
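A small 2-D sketch of that idea, with a made-up weight vector and bias: the sign of $\vec{w} \cdot \vec{x} + b$ tells which side of the decision boundary each input falls on.

```python
import numpy as np

# Hypothetical boundary: x1 + x2 = 1 (i.e. w = [1, 1], b = -1).
w = np.array([1.0, 1.0])
b = -1.0

points = np.array([[0.0, 0.0],   # below the line
                   [1.0, 1.0],   # above the line
                   [0.5, 0.5]])  # exactly on the boundary

# The sign of w·x + b classifies each point: positive means "fire".
scores = points @ w + b
labels = scores > 0
print(labels)  # [False  True False]
```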
> Functions like the ones shown avoid counterintuitive jumps and can model continuous values (e.g. a probability): ([View Highlight](https://read.readwise.io/read/01gj3r3x15dqpwz9ceky9nys1n))
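One common example of such a smooth activation is the logistic sigmoid (my addition, not named in the highlight): it maps $\vec{w} \cdot \vec{x} + b$ into $(0, 1)$ with no jump, so the output can be read as a probability.

```python
import numpy as np

def sigmoid(z):
    # Smooth, monotonic map from the reals to (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

print(sigmoid(0.0))   # 0.5: input sits exactly on the decision boundary
print(sigmoid(5.0))   # close to 1: far on the "fire" side
print(sigmoid(-5.0))  # close to 0: far on the "don't fire" side
```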
> The power of ANNs is illustrated by the **universal approximation theorem**, which states that ANNs using activation functions like these can model **any** continuous function, given some general requirements about the size and layout of the ANN. ([View Highlight](https://read.readwise.io/read/01gj3r3n99qhxt2rxcbqb6d68r))
> No matter how complicated a situation is, a sufficiently large ANN with the appropriate parameters can model it ([View Highlight](https://read.readwise.io/read/01gj3r51t25zkcb7g088jxpbbc))
Note: This would seem to be a huge advantage of [[Artificial intelligence]]s over human ones.
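A rough sketch of the intuition behind the theorem (my own illustration, not the article's proof): two steep sigmoid hidden units with output weights +1 and -1 form a localized "bump", and sums of such bumps can approximate any continuous function on an interval.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bump(x, lo, hi, steep=50.0):
    # One hidden layer, two sigmoid units, output weights +1 and -1:
    # approximately 1 on [lo, hi] and approximately 0 elsewhere.
    return sigmoid(steep * (x - lo)) - sigmoid(steep * (x - hi))

xs = np.linspace(0.0, 1.0, 101)
ys = bump(xs, 0.3, 0.7)
print(ys[50])  # near 1: x = 0.5 lies inside the interval
print(ys[0])   # near 0: x = 0.0 lies outside
```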