dc.description.abstract | In traditional neural networks, fixed weights and biases determine how an input is transformed into an output. In a Bayesian Neural Network (BNN), every weight and bias instead carries a probability distribution. To classify an image, we perform multiple forward passes of the network, each time with a freshly sampled set of weights and biases. Instead of a single set of output values, we obtain one set per pass, and taken together these outputs form a probability distribution over the predictions. Recent literature introduced Bayes by Backprop, an algorithm for learning a probability distribution over the weights and biases of a neural network. In this work, the accuracies of a conventional neural network (NN) and a BNN on MNIST classification are studied. Evaluation of the implementations shows that the BNN gives better results than the conventional NN. Additionally, Bayes by Backprop is adapted to autoencoders: a Bayesian Autoencoder (BAE) is implemented by modifying the configuration and the loss function of the existing algorithm. | |
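
As a rough illustration of the prediction procedure the abstract describes, the sketch below draws a new set of weights for each forward pass and averages the resulting class probabilities. It is a minimal sketch, not the thesis implementation: the single linear layer, the Gaussian posterior parameterisation, and all identifiers (mu, rho, sample_weights, forward, predict) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed variational posterior: one Gaussian per parameter, described by a
# mean (mu) and a softplus-transformed scale (rho), as in Bayes by Backprop.
mu = {"W": rng.normal(0, 0.1, (784, 10)), "b": np.zeros(10)}
rho = {"W": np.full((784, 10), -3.0), "b": np.full(10, -3.0)}

def sample_weights():
    """Draw one set of weights and biases from the posterior."""
    sigma = {k: np.log1p(np.exp(v)) for k, v in rho.items()}  # softplus
    return {k: mu[k] + sigma[k] * rng.standard_normal(mu[k].shape) for k in mu}

def forward(x, params):
    """Single forward pass: linear layer followed by softmax."""
    logits = x @ params["W"] + params["b"]
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def predict(x, n_samples=20):
    """Average class probabilities over n_samples sampled networks."""
    probs = np.stack([forward(x, sample_weights()) for _ in range(n_samples)])
    return probs.mean(axis=0), probs.std(axis=0)  # mean prediction and spread

x = rng.random((1, 784))              # a dummy flattened 28x28 MNIST image
mean_probs, spread = predict(x)
print(mean_probs.argmax(), spread.max())
```

The spread across the sampled passes is what distinguishes the BNN from a conventional NN with point-estimate weights: it gives a per-prediction measure of uncertainty in addition to the class label.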