Today I'm going to give a short introduction to a simple machine learning algorithm called the Radial Basis Function (RBF) network. I learned this method in my machine learning class, and you can get a good understanding of it by referring to my assignment.
Neural Networks offer a powerful framework for representing nonlinear mappings from several inputs to one or more outputs. An important application of neural networks is regression. Instead of mapping the inputs into a discrete class label, the neural network maps the input variables into continuous values.
A major class of neural networks is the radial basis function (RBF) neural network. We will look at the architecture of RBF neural networks, followed by their applications in both regression and classification.
Main Features of RBF
- They are two-layer feed-forward networks.
- The hidden nodes implement a set of radial basis functions (e.g. Gaussian functions).
- The output nodes implement linear summation functions, as in an MLP.
- The network training is divided into two stages: first the weights from the input to the hidden layer are determined, and then the weights from the hidden to the output layer.
- The training/learning is very fast.
- The networks are very good at interpolation.
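To make the two-stage training concrete, here is a minimal sketch of an RBF regression network in NumPy. It is an illustration under simple assumptions (centres picked as a random subset of the inputs, a single shared Gaussian width `sigma`), not the only way to choose the hidden layer; in practice the centres are often found by clustering (e.g. k-means).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data: y = sin(x) plus a little noise.
X = np.linspace(0.0, 2.0 * np.pi, 50)
y = np.sin(X) + 0.1 * rng.standard_normal(X.shape)

# Stage 1: fix the input-to-hidden layer by choosing the hidden-unit
# centres (here: a random subset of the training inputs) and a width.
n_hidden = 10
centres = rng.choice(X, size=n_hidden, replace=False)
sigma = 1.0  # shared width for all Gaussian basis functions (an assumption)

def design_matrix(x, centres, sigma):
    # Gaussian radial basis: phi_j(x) = exp(-(x - c_j)^2 / (2 * sigma^2))
    d = x[:, None] - centres[None, :]
    return np.exp(-d**2 / (2.0 * sigma**2))

Phi = design_matrix(X, centres, sigma)

# Stage 2: the output nodes are linear summations, so the hidden-to-output
# weights come from ordinary linear least squares -- no backpropagation,
# which is why RBF training is so fast.
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

y_hat = Phi @ w
print("training RMSE:", np.sqrt(np.mean((y_hat - y) ** 2)))
```

Because stage 2 is just a linear solve, the whole fit runs in a fraction of a second even for many hidden units, which matches the "very fast training" and "good at interpolation" points above.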
Please see the following links:
My Assignment submission
My Radial Basis Function Project