About Daniel Rios

Daniel Rios is an electronics and software engineer from the Autonomous University of Guadalajara. He has worked on many projects involving the full development of artificial intelligence systems.

Studying a Degree in Computer Science and AI

A bachelor's degree in computer science is the career where you can study and learn everything related to modern computation. Most computer science programs include artificial intelligence as part of the curriculum. If you are planning to study this career and you like everything about artificial intelligence, let me tell you something: it is not a piece of cake.

What you need

A bachelor's degree in computer science is a tough career. You need to be creative and have a natural affinity for math. Moreover, get ready for long nights of studying before your exams, and forget about those wild parties, because your time now belongs to your career.

Typical Requirements

Computer science covers all the basic math courses, such as algebra, calculus, differential equations, probability and statistics, discrete mathematics and more. Furthermore, it includes all the programming-related courses, such as programming principles, algorithms, data structures, logic and computation, and computer architecture.

Beyond the basic studies, you are able to choose some optional courses such as operating systems, compilers, real-time computing, computer graphics, multimedia, algorithm design and analysis, software testing and, of course, artificial intelligence.

I constantly receive emails asking for recommendations about choosing a career related to computer science. My best recommendation is to visit your local schools and ask for information. However, if you've got what it takes, then go for it.

Another thing to keep in mind is that studying these careers can be expensive (I say this from experience). So if you are intelligent and get good grades on your exams, you have a chance of getting a scholarship.

Here is a link that takes you to a page where you can apply for a scholarship. This is only for US residents. You only have to register and fill out the form with your basic information: name, email address, and phone. Then you may apply for one.

Go Here and Register

Forecasting Software for Stock Market Prediction

Forecasting Software Neural Networks

One of the most remarkable properties of artificial neural networks is their capability to predict patterns. Therefore, it is common to find applications designed to predict events before they happen. The best-known applications of this kind are designed to predict stock market returns. This kind of software is known as forex or forecasting software.

In my experience, artificial neural networks are capable of predicting market changes accurately as long as they are properly trained. So, if you are looking for a solution, stay here and check what I am about to say.

Predict your Stock Market Changes and Make Decisions Based On It

The perfect forex application would be capable of telling you when to buy stocks and when to sell them. Unfortunately, there is no such perfect application, but you can get accuracy around 99%, which is close to perfect.

Is there such Perfect Application?

The answer is NO. However, I found something that gets really close. Here is something you may like, and it isn't that expensive.

Presenting NeuroMaster

NeuroMaster is a forecasting application designed to predict stock market changes. It practically tells you when to buy and when to sell. This package was programmed and designed by Konstantin Grek, a Russian programmer with vast experience in the field.

Here is an intro video of the application.

NeuroAI Test On this Application

Before saying this package may suit your needs, we ran some tests on it. Here is what we found:

User Interface

The UI is very friendly, intuitive and easy to use. What you really need in order to understand this package is finance and stock market terminology. Therefore, this package is designed for traders and investors.

Functionality

NeuroMaster reports its findings in two different ways: data tables and charts. We ran tests predicting data, and so far I can say it does this pretty well. Its predictions showed some variation from the real results, but these variations were not harmful.

Documentation

The documentation and “how to” modules are well designed, and you can easily learn to use the software.

Pricing

The price of this software package may seem high ($250 USD), but it is actually cheap compared to other packages. And most importantly: you'll get satisfactory results. If you don't like it, you can ask for your money back without trouble.

Go to the NeuroMaster Home Page Here

Using the Backpropagation Library in your Application

In this short document I'll show you how to quickly set up the backpropagation library for your application.

Visual Studio Users

First off, you have to get the library. If you don’t have it you can get it here.

  • Start a new project in your Visual Studio IDE.
  • If you keep the MLBP libraries in a different folder than your project, make sure to add the (.h) header file location to the include folders in your Visual Studio configuration. Make sure to include the libs folder too.
  • Defining global macros: if you want to use the double precision library, add the macro DOUBLE_PRECISION to the global preprocessor defines. I also recommend adding the macro SHARED_LIBRARY in case you get linker errors.
  • Adding libraries to the project: so that Visual Studio can find which library it should link to, go to your project properties -> VC++ Directories and add the directory where the .lib files are stored.
  • Add the directive #pragma comment(lib,"mlbp_stsfp.lib"), or #pragma comment(lib,"mlbp_stdfp.lib") if you enabled DOUBLE_PRECISION, to any of your source files (a small source-level sketch follows this list).
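
If it helps to see the last two steps in one place, here is a minimal source-level sketch. The header file name is an assumption (use whichever header ships with MLBP), and the macros normally belong in the project's preprocessor definitions as described above; defining them in source only works if the header checks them at include time.

// Minimal sketch of the macro and pragma steps above (Visual C++).
// DOUBLE_PRECISION and SHARED_LIBRARY usually go in the project's global
// preprocessor definitions; defining them here is an alternative, assuming
// the MLBP header checks them at include time.
#define DOUBLE_PRECISION
#define SHARED_LIBRARY
//#include "mlbp.h" // assumed header name; include the header shipped with MLBP

// Tell the linker which import library to use:
// mlbp_stsfp.lib for single precision, mlbp_stdfp.lib when DOUBLE_PRECISION is enabled.
#pragma comment(lib, "mlbp_stdfp.lib")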

Qt Creator Users

  • Start a new project in Qt Creator.
  • In the PRO file, use INCLUDEPATH to add the folder where all the header files are located. Example: INCLUDEPATH+=c:/mlbp/include
  • Add the linking library using LIBS. Example: LIBS+=c:/mlbp/libs/vc_x86/mlbp_stsfp.lib
  • Defining global macros: if you want to use the double precision library, include this statement: DEFINES+=DOUBLE_PRECISION. If you are getting linker errors, also add the macro SHARED_LIBRARY.

Now you are ready to start coding your project.

Initializing Neural Network

1. First, bring in the namespace where the library is grouped: mlbp.

using namespace mlbp;

2. Declare a variable as a bp object: the neural network object.

bp net;

3. Initialize the neural network to your needs with the function bp::create():

if(!net.create(PATTERN_SIZE,OUTPUT_SIZE,INPUTNEURON_COUNT))
{
        cout << "Could not create network";
        return 0;
}

4. Initialize all neuron weights to random values with the function bp::setStartValues():

net.setStartValues();

5. If you have the multithreaded version of the library and you want to use it, call bp::setMultiThreaded(true).
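
Putting steps 1 through 5 together, a minimal initialization sketch could look like the one below. The MLBP header name and the three size constants are placeholders; use your own values and the header shipped with the library.

#include <iostream>
//#include "mlbp.h"       // assumed header name; include the header shipped with MLBP
using namespace std;
using namespace mlbp;

#define PATTERN_SIZE 2       // size of each input pattern (placeholder)
#define OUTPUT_SIZE 1        // size of the desired output (placeholder)
#define INPUTNEURON_COUNT 3  // neurons in the input layer (placeholder)

int main()
{
    bp net; // the neural network object
    if(!net.create(PATTERN_SIZE,OUTPUT_SIZE,INPUTNEURON_COUNT))
    {
        cout << "Could not create network";
        return 0;
    }
    net.setStartValues();           // initialize all weights to random values
    //net.setMultiThreaded(true);   // only with the multithreaded version
    return 0;
}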

Network ready for training

Now the network is ready for training. Use the function bp::train() or any variation of it to start training the network. The train function is used inside a loop, and all training patterns must pass through that loop. The idea of this loop is to iterate until bp::train() returns an error close to zero.

You can do it this way:

float error = 1.0f; // start above the threshold so the loop runs at least once
while(error > 0.001f)
{
    error = 0;
    for(int i = 0; i < patterncount; i++)
    {
        error += net.train(desiredOutput[i], input[i], 0.09f, 0.1f);
    }
    error /= patterncount;
    cout << "ERROR:" << error << endl;
}

Or you just can do it by setting a fixed number of iterations:

float error;
for(int i = 0; i < 45000; i++)
{
    error = 0;
    for(int j = 0; j < patterncount; j++)
    {
        error += net.train(desiredOutput[j], input[j], 0.09f, 0.1f);
    }
    error /= patterncount;
    cout << "ERROR:" << error << endl;
}

I recommend the second way because, with the first one, if for any reason the network never reaches that minimum value, it will hang in an infinite loop.
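
You can also combine both approaches: run a fixed maximum number of epochs but stop early once the error drops below your target. This is a small sketch built from the two loops above, using the same net, input, desiredOutput and patterncount variables.

float error = 1.0f;
for(int epoch = 0; epoch < 45000 && error > 0.001f; epoch++)
{
    error = 0;
    for(int i = 0; i < patterncount; i++)
    {
        error += net.train(desiredOutput[i], input[i], 0.09f, 0.1f);
    }
    error /= patterncount;
    cout << "ERROR:" << error << endl;
}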

After Training

Training is usually intensive when you work with large patterns, and even more so when you use many of them. So you may want to save the work you have done. There are two ways to do it.

Using Save Function

The easiest way is by using the bp::save() function.

if(!net.save("network.net",USERID1,USERID2))
    {
        if(net.getError()==BP_E_FILEWRITE_ERROR)
        {
            cout << "Could not open file for writing";
        }
        else if(net.getError()==BP_E_EMPTY)
        {
            cout << "Network is empty";
        }
    }

Getting a linear buffer of the network

The second way is easy too, and useful if you want to save the network in your own file format. You only have to call the function bp::getRawData() and store the result in a bpBuffer. Use the function bpBuffer::get() to get the buffer as an unsigned char array. This way you can save the buffer in a customized file.

bpBuffer buff;
buff=net.getRawData();
if(!buff.isEmpty())
{
    FILE *f;
    f=fopen("yourfile.dat","wb"); //open the file for writing in binary mode
    fwrite(buff.get(),sizeof(char),buff.size(),f);
    fclose(f);
}

To create a network from a buffer, just load a bpBuffer with the unsigned char array and use bp::setRawData(bpBuffer).

bpBuffer buff;
unsigned char *ucbuffer;
unsigned int usize;
//LOAD ucbuffer and usize here
//.....
buff.set(ucbuffer,usize);
if(!net.setRawData(buff))
{
    if(net.getError()==BP_E_CORRUPTED_DATA)
    {
        cout << "Invalid buffer";
    }
    //...
}

Using for Production

If your application won't perform training every time it is used by the final user, or if it only does it once, you only have to replace the steps of setting random start values and training with loading the neural network data from a file, and then call bp::run() every time your application needs the services of the neural network.
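
As a rough sketch of that production flow, here is one way to load a network that was previously saved with the raw-buffer method above. The exact loading behavior (for example, whether bpBuffer::set() copies the data) and the signature of bp::run() are not shown in this document, so treat the details as assumptions and check the reference documentation.

#include <cstdio>

// Production-time sketch: read a previously saved raw buffer from disk and
// rebuild the network from it. Assumes the file was written with bpBuffer::get()
// as shown earlier.
bool loadNetwork(bp &net, const char *filename)
{
    FILE *f = fopen(filename, "rb");
    if(!f) return false;
    fseek(f, 0, SEEK_END);
    unsigned int usize = (unsigned int)ftell(f);
    fseek(f, 0, SEEK_SET);
    unsigned char *ucbuffer = new unsigned char[usize];
    fread(ucbuffer, sizeof(char), usize, f);
    fclose(f);
    bpBuffer buff;
    buff.set(ucbuffer, usize);
    delete[] ucbuffer;            // assumes bpBuffer::set() copies the data
    return net.setRawData(buff);  // returns false on corrupted data
}

After loading, call bp::run() with your input pattern whenever the application needs the network's output.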

Check an example program here.

Check Reference Documentation here

Multi-Layer Backpropagation Library v1.0

If by any chance you want to create an application that needs the power of a neural network, then I have a solution for you. Let me introduce the Multi-Layer Feed-Forward Library, which uses the back-propagation training algorithm.

This library will certainly make it easier to implement a neural network in your application. For now, the library only supports feed-forward networks.

You probably know what a feed-forward network is; if not, I'll explain it briefly. A feed-forward neural network is a structure where the inputs are propagated and processed through the neurons from the input layer to the output layer. This is by far one of the simplest and most useful neural networks around.

What you can do with this library

This library allows you to create, train and use neural networks in your application, all by writing just a few lines of code.

Supported programming Languages

So far the library supports C++ on Windows, and it will be available for Linux and mobile devices soon.

Features

  • Creation and training of neural networks of unlimited sizes and layers.
  • An easy way to store the neural network data in files with a customized ID, so that only your application can access it.
  • Easy-to-use data structures for safely handling lists and floating point arrays
  • Multithreaded execution during training
  • Source code examples
  • Full documentation
  • Full support via email

Requirements

  • Visual C++ compiler or GNU C++
  • Microsoft Visual C++ 2008 runtime for the 32-bit version, Microsoft Visual C++ 2010 runtime for the 64-bit version, or the MinGW runtimes, depending on which compiler you have.
  • Available as a dynamic link library

Price

The Multi-Layer feed forward library is free.

You can use it accepting that this software is provided ‘as-is’, without any express or implied warranty. In no event will the authors be held liable for any damages arising from the use of this software.
You can use it in non-commercial and commercial applications under the following restrictions:

  • You must not take ownership of, or claim that you wrote, this library.
  • An acknowledgment in your application documentation is required, along with a link to the NeuroAI website URL:
  • You may not reverse engineer, decompile, or disassemble any of the binary files included in this package.
  • You may not modify any source files and libraries included in this package and republish them as yours.
  • Copyright and license notices on source files may not be removed or altered.

Download

UPDATE

Download Most Recent Version MLBP version 1.0.1a.
FIXED SOME BUGS!

Older versions of the library are here. Feel free to report any bugs you find.

Documentation: Multi-Layer Backpropagation Library Reference

Examples: Simple Implementation Source code

Writing the Backpropagation Algorithm into C++ Source Code

Understanding a complex algorithm such as backpropagation can be confusing. You have probably browsed many pages just to find lots of confusing math formulas. Unfortunately, that's the way engineers and scientists designed these neural networks. However, there is always a way to port each formula to program source code.

Porting the Backpropagation Neural Network to C++

In this short article, I am going to show you how to port the backpropagation network to C++ source code. Please note that I am only going to cover the basics here. You will have to do the rest.

First part: Network Propagation

The neural network propagation function is defined by net=f(\sum\limits_{i=1}^n x_i.w_i + \theta_i.\theta_w), where net is the output value of each neuron of the network and f(x) is the activation function. For this implementation, I'll be using the sigmoid function f(x)=1/(1+e^{-x}) as the activation function. Please note that the training algorithm shown in this article is designed for this activation function.
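
Before diving into the data structures, here is the activation function on its own. This is just the formula above written as a small helper, assuming float precision; the article's code later inlines the same expression inside layer::calculate().

#include <cmath>

//Sigmoid activation: f(x) = 1 / (1 + e^-x)
float sigmoid(float x)
{
    return 1.f/(1.f + exp(-x));
}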

Feed-forward networks are composed of neurons and layers. So, to make this port to source code easier, let's use the power of C++ classes and structures to represent each portion of the neural network.

Neural Network Data Structures

A feed-forward network, like many neural networks, is composed of layers. In this case, backpropagation works on a multi-layer network, so we must find a way to implement each layer as a separate unit, as well as each neuron. Let's begin with the simplest structures and work up to the complex ones.

Neuron Structure

The neuron structure should contain everything a neuron represents:

  • An array of floating point numbers as the “synaptic connectors” or weights
  • The output value of the neuron
  • The gain value of the neuron (this is usually 1)
  • The weight or synaptic connector of the gain value
  • Additionally, an array of floating point values holding the delta values, which are the last delta updates from a previous iteration. Please note these values are used only during training. See the delta rule for more details on /backpropagation.html.

struct neuron
{
    float *weights; // neuron input weights or synaptic connections
    float *deltavalues; //neuron delta values
    float output; //output value
    float gain;//Gain value
    float wgain;//Weight gain value
    neuron();//Constructor
    ~neuron();//Destructor
    void create(int inputcount);//Allocates memory and initializes values
};
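
The article only declares neuron::create(); the full implementation ships with the downloadable source. As a rough sketch of what it needs to do (the random initialization range here is my own assumption):

#include <cstdlib>

//Possible implementation sketch: allocate the weight and delta arrays and give
//the weights small random starting values. The gain is fixed at 1.
void neuron::create(int inputcount)
{
    weights=new float[inputcount];
    deltavalues=new float[inputcount];
    for(int i=0;i<inputcount;i++)
    {
        weights[i]=((float)rand()/RAND_MAX) - 0.5f; //assumed range [-0.5,0.5]
        deltavalues[i]=0;
    }
    gain=1; //gain is usually 1
    wgain=((float)rand()/RAND_MAX) - 0.5f;
    output=0;
}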

Layer Structure

Our next structure is the “layer”. Basically, it contains an array of neurons along with the layer input. All neurons from the layer share the same input, so the layer input is represented by an array of floating point values.

struct layer
{
    neuron **neurons;//The array of neurons
    int neuroncount;//The total count of neurons
    float *layerinput;//The layer input
    int inputcount;//The total count of elements in layerinput
    layer();//Object constructor. Initializes all values as 0
    ~layer();//Destructor. Frees the memory used by the layer
    void create(int inputsize, int _neuroncount);//Creates the layer and allocates memory
    void calculate();//Calculates all neurons performing the network formula
};

The “layer” structure contains a block of neurons representing a layer of the network. It holds a pointer to an array of “neuron” structures, the array containing the layer input, and their respective count descriptors. Moreover, it includes the constructor, destructor and creation functions.
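
Likewise, layer::create() is only declared above. A possible sketch, assuming each neuron allocates its own arrays through neuron::create():

//Possible implementation sketch: allocate the layer input array and the array
//of neuron pointers, then let each neuron allocate its own weights.
void layer::create(int inputsize, int _neuroncount)
{
    inputcount=inputsize;
    neuroncount=_neuroncount;
    layerinput=new float[inputsize];
    neurons=new neuron*[_neuroncount];
    for(int i=0;i<_neuroncount;i++)
    {
        neurons[i]=new neuron();
        neurons[i]->create(inputsize);
    }
}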

The Neural Network Structure

class bpnet
{
private:
    layer m_inputlayer;//input layer of the network
    layer m_outputlayer;//output layer..contains the result of applying the network
    layer **m_hiddenlayers;//Additional hidden layers
    int m_hiddenlayercount;//the count of additional hidden layers
public:
    bpnet();//Constructor..initializes all values to 0
    ~bpnet();//Destructor..releases memory
    //Creates the network structure on memory
    void create(int inputcount,int inputneurons,int outputcount,int *hiddenlayers,int hiddenlayercount);
    void propagate(const float *input);//Calculates the network values given an input pattern
    //Updates the weight values of the network given a desired output and applying the backpropagation
    //Algorithm
    float train(const float *desiredoutput,const float *input,float alpha, float momentum);
    //Updates the next layer input values
    void update(int layerindex);
    //Returns the output layer..this is useful to get the output values of the network
    inline layer &getOutput()
    {
        return m_outputlayer;
    }
};

The “bpnet” class represents the entire neural network. It contains its basic input layer, output layer and optional hidden layers.
Picturing the network structure isn't that difficult. The trick comes when implementing the training algorithm. Let's focus on the primary function bpnet::propagate(const float *input) and the member function layer::calculate(). What these functions do is propagate and calculate the neural network output values. The propagate function is the one you should use in your final application.

Calculating the network values

Calculating a layer using the net=f(\sum\limits_{i=1}^n x_i.w_i + \theta_i.\theta_w) function

Our first goal is to calculate each layer's neurons, and there is no better way than implementing a member function in the layer object to do this job. The function layer::calculate() shows how to implement the formula net=f(\sum\limits_{i=1}^n x_i.w_i + \theta_i.\theta_w) applied to the layer.

void layer::calculate()
{
    int i,j;
    float sum;
    //Apply the formula for each neuron
    for(i=0;i<neuroncount;i++)
    {
        sum=0;//store the sum of all values here
        for(j=0;j<inputcount;j++)
        {
        //Performing function
            sum+=neurons[i]->weights[j] * layerinput[j]; //apply input * weight
        }
        sum+=neurons[i]->wgain * neurons[i]->gain; //apply the gain or theta multiplied by the gain weight.
        //sigmoidal activation function
        neurons[i]->output= 1.f/(1.f + exp(-sum));//calculate the sigmoid function
    }
}

Calculating and propagating the network values

The propagate function calculates the network values given an input. It starts by calculating the input layer, then propagates to the next layer, calculating each layer until it reaches the output layer. This is the function you would use in your application. Once the network has been propagated and calculated, you only need to read the output values.

void bpnet::propagate(const float *input)
{
    //The propagation function should start from the input layer
    //first copy the input vector to the input layer. Always make sure the
    //"array input" has the same size as inputcount
    memcpy(m_inputlayer.layerinput,input,m_inputlayer.inputcount * sizeof(float));
    //now calculate the inputlayer
    m_inputlayer.calculate();
    update(-1);//propagate the inputlayer out values to the next layer
    if(m_hiddenlayers)
    {
        //Calculating hidden layers if any
        for(int i=0;i<m_hiddenlayercount;i++)
        {
            m_hiddenlayers[i]->calculate();
            update(i);
        }
    }
    //calculating the final stage: the output layer
    m_outputlayer.calculate();
}

Training the network

Finally, training the network is what makes the neural network useful. A neural network without training does not really do anything. The training function is what applies the backpropagation algorithm. I'll do my best to help you understand how this is ported to a program.

The training process consists of the following steps:

  • First, calculate the network with the function propagate.
  • We need a desired output for the given pattern, so we must include this data.
  • Calculate the quadratic error and the layer error for the output layer. The quadratic error is determined by E^p=\frac{1}{2}\sum\limits_{o}(d_o^p - y_o^p)^2 where d_o^p,y_o^p are the desired and current output respectively.
  • Calculate the error value of the current (output) layer by \delta_o=(d_o^p - y_o^p).y_o^p.(1 - y_o^p).
  • Update the weight values for each neuron applying the delta rule \Delta w_i(t+1)=\gamma.\delta.y_i + \alpha.\Delta w_i(t), where \gamma is the learning rate constant, \delta the layer error and y the layer input value. \alpha is the learning momentum and \Delta w is the previous delta value.
    The next weight value would be w(t+1)_i=w(t)_i +\Delta w(t+1)_i
  • The same rule applies to the hidden and input layers. However, the layer error is calculated in a different way:
    lerror_c=nout_c * (1-nout_c).\sum\limits_{i=1}^n lerror_l . w_l where lerror_l and w_l are the error and weight values from the previously processed layer, and nout_c is the output of the neuron currently processed.
//Main training function. Run this function in a loop as many times needed per pattern
float bpnet::train(const float *desiredoutput, const float *input, float alpha, float momentum)
{
    //function train, teaches the network to recognize a pattern given a desired output
    float errorg=0; //general quadratic error
    float errorc; //local error;
    float sum=0,csum=0;
    float delta,udelta;
    float output;
    //first we begin by propagating the input
    propagate(input);
    int i,j,k;
    //the backpropagation algorithm starts from the output layer propagating the error  from the output
    //layer to the input layer
    for(i=0;i<m_outputlayer.neuroncount;i++)
    {
        //calculate the error value for the output layer
        output=m_outputlayer.neurons[i]->output; //copy this value to facilitate calculations
        //from the algorithm we can take the error value as
        errorc=(desiredoutput[i] - output) * output * (1 - output);
        //and the general error as the sum of delta values. Where delta is the squared difference
        //of the desired value with the output value
        //quadratic error
        errorg+=(desiredoutput[i] - output) * (desiredoutput[i] - output) ;
        //now we proceed to update the weights of the neuron
        for(j=0;j<m_outputlayer.inputcount;j++)
        {
            //get the current delta value
            delta=m_outputlayer.neurons[i]->deltavalues[j];
            //update the delta value
            udelta=alpha * errorc * m_outputlayer.layerinput[j] + delta * momentum;
            //update the weight values
            m_outputlayer.neurons[i]->weights[j]+=udelta;
            m_outputlayer.neurons[i]->deltavalues[j]=udelta;
            //we need this to propagate to the next layer
            sum+=m_outputlayer.neurons[i]->weights[j] * errorc;
        }
        //calculate the weight gain
        m_outputlayer.neurons[i]->wgain+= alpha * errorc * m_outputlayer.neurons[i]->gain;
    }
    for(i=(m_hiddenlayercount - 1);i>=0;i--)
    {
        for(j=0;j<m_hiddenlayers[i]->neuroncount;j++)
        {
            output=m_hiddenlayers[i]->neurons[j]->output;
            //calculate the error for this layer
            errorc= output * (1-output) * sum;
            //update neuron weights
            for(k=0;k<m_hiddenlayers[i]->inputcount;k++)
            {
                delta=m_hiddenlayers[i]->neurons[j]->deltavalues[k];
                udelta= alpha * errorc * m_hiddenlayers[i]->layerinput[k] + delta * momentum;
                m_hiddenlayers[i]->neurons[j]->weights[k]+=udelta;
                m_hiddenlayers[i]->neurons[j]->deltavalues[k]=udelta;
                csum+=m_hiddenlayers[i]->neurons[j]->weights[k] * errorc;//needed for next layer
            }
            m_hiddenlayers[i]->neurons[j]->wgain+=alpha * errorc * m_hiddenlayers[i]->neurons[j]->gain;
        }
        sum=csum;
        csum=0;
    }
    //and finally process the input layer
    for(i=0;i<m_inputlayer.neuroncount;i++)
    {
        output=m_inputlayer.neurons[i]->output;
        errorc=output * (1 - output) * sum;
        for(j=0;j<m_inputlayer.inputcount;j++)
        {
            delta=m_inputlayer.neurons[i]->deltavalues[j];
            udelta=alpha * errorc * m_inputlayer.layerinput[j] + delta * momentum;
            //update weights
            m_inputlayer.neurons[i]->weights[j]+=udelta;
            m_inputlayer.neurons[i]->deltavalues[j]=udelta;
        }
        //and update the gain weight
        m_inputlayer.neurons[i]->wgain+=alpha * errorc * m_inputlayer.neurons[i]->gain;
    }
    //return the general error divided by 2
    return errorg / 2;
}

Sample Application

The complete source code can be found at the end of this article. I also included a sample application that shows how to use the class "bpnet" in an application. The sample shows how to teach the neural network to learn the XOR (exclusive or) gate.
There isn't much complexity in creating an application.

#include <iostream>
#include "bpnet.h"
using namespace std;
#define PATTERN_COUNT 4
#define PATTERN_SIZE 2
#define NETWORK_INPUTNEURONS 3
#define NETWORK_OUTPUT 1
#define HIDDEN_LAYERS 0
#define EPOCHS 20000
int main()
{
    //Create some patterns
    //playing with xor
    //XOR input values
    float pattern[PATTERN_COUNT][PATTERN_SIZE]=
    {
        {0,0},
        {0,1},
        {1,0},
        {1,1}
    };
    //XOR desired output values
    float desiredout[PATTERN_COUNT][NETWORK_OUTPUT]=
    {
        {0},
        {1},
        {1},
        {0}
    };
    bpnet net;//Our neural network object
    int i,j;
    float error;
    //We create the network
    net.create(PATTERN_SIZE,NETWORK_INPUTNEURONS,NETWORK_OUTPUT,HIDDEN_LAYERS,HIDDEN_LAYERS);
    //Start the neural network training
    for(i=0;i<EPOCHS;i++)
    {
        error=0;
        for(j=0;j<PATTERN_COUNT;j++)
        {
            error+=net.train(desiredout[j],pattern[j],0.2f,0.1f);
        }
        error/=PATTERN_COUNT;
        //display error
        cout << "ERROR:" << error << "\r";
    }
    //once trained test all patterns
    for(i=0;i<PATTERN_COUNT;i++)
    {
        net.propagate(pattern[i]);
    //display result
        cout << "TESTED PATTERN " << i << " DESIRED OUTPUT: " << *desiredout[i] << " NET RESULT: "<< net.getOutput().neurons[0]->output << endl;
    }
    return 0;
}

Download the source as a ZIP file here. Please note this code is for educational purposes only and it is not allowed to be used for commercial purposes.
UPDATE: The source code is also available on GitHub: https://github.com/danielrioss/bpnet_wpage

Book: Neural Networks Algorithms, Applications and Programming Techniques

If you think learning all about neural networks is really easy, I would tell you that you really need to give it some time and have the patience to understand their complexity.

I have always tried to give you the best information on this website. But there is no better way to learn than getting a good book and working through it.

A good reading

Here is a book I would really like you to get: Neural Networks, Algorithms, Applications and Programming Techniques. Of all the books I have read, I'd say this is the best, and it is something worth having in your hands.

Get the hard copy

If you search carefully you will find a lot of soft copies around. But in this case I really recommend you get the hard copy of this book. Why? This is an invaluable resource that guides you through the learning process. In fact, even if you are already an expert, this book serves as a guide and reference if for any reason you get stuck.

Moreover, it is easier to read a hard copy than to read on a computer. Besides, the soft copy that is floating around the net has terrible quality.

Topics Covered and Structure

The structure of the book is really simple. The author, James Freeman, explains everything in detail but with understandable words. It goes chapter by chapter, explaining each neural network structure, and at the end of each chapter he shows you how to create a software simulator for that network.

How to get this book

You can go to your local bookstore; if you are lucky you may get it there. Or you can buy it online; it only costs about $75 USD new. You can also find it used for much less, about $10 USD. That is really nothing compared to other books.

Artificial Neural Networks and their Role in Our Lives


By Andrew Ziegelstein

Formations of artificial neural networks completely alter the progression of human thought. Artificial neural networks (ANNs) represent models of information processors that resemble biological neural networks. While ANNs provide individuals more efficient ways of processing data, adverse results occur if machines interfere with human cognition. Development of artificial neural networks remains a fascinating element of scientific discovery, but this innovation brings about revolutionary changes that benefit and harm development of individuals’ intelligence.

Neural networks consist of cells known as neurons that transmit electrical impulses throughout the central nervous system. Individual neurons consist of dendrites, soma, axons, and myelin sheath. Dendrites receive signals from other neurons. The soma represents the cell body, protecting the neuron nucleus. Axons act as terminals for electrical impulses, with the myelin sheath acting as an insulator. Certain neurons perform specific tasks, such as transmitting signals from sensory or motor organs to the brain (Wood & Wood & Jones, 2006). Multiple neurons transmitting data for a specific purpose form a neural network. Modern scientists continue to improve on creating ANN models that duplicate the phenomena of biological neurons, enabling inventors to create machines that perform humanlike tasks.

According to psychologist George A. Miller, cognitive science began on September 11, 1956 at the Massachusetts Institute of Technology (MIT). The Symposium on Information Theory officially documented discussions about artificial intelligence. Students at Dartmouth College began to develop programs to solve problems, recognize patterns, play games, and reason (Gardner, 1987). Scientists apply artificial neural networks in speech, image analysis, and robotics. Others utilize ANN setups to provide mathematical models of biological neural networks. Regardless of purpose, the creation of artificial neural networks remains complex.

Scientists mathematically define artificial neural networks by models similar to Figure 1. First, inputs (X0 through Xp) combine with synapses of the neuron represented as weights. These weights transmit throughout the neuron, eventually combining at a “summing junction” ∑. Electronic impulses relay this compilation to the “activation function” that sends out the desired output. The equation in Figure 1b defines biological neurons’ processes through a mathematical function (Rios, 2007-2008). In a biological network, the “summing junction” represents the spinal cord, while the “activation function” serves as the brain. Output Yk represents the reaction the brain forces a person to perform.

While scientists know what artificial neural networks consist of, researchers disagree on whether or not there exists a “central planner” that collects information from any location in the system. Andy Clark states that in the human body, the brain represents a “central planner,” and experiments prove the organ’s ability to comprehend multiple sources of data. Clark notes that in certain ANNs, independent devices employ themselves in separate locations, each with an individual pathway to convert sensory inputs into actions. However, Clark believes that for an ANN to act like a human, there must be a centralized network similar to the brain. Humans connect sensory input together when hearing, touching, and seeing at the same time. Also, Clark disagrees with the theory that transmitting information to a centralized area like the brain adds significant lengths of time in computation. Through methods such as ballistic reaching, preset trajectories, and motor emulation, Clark believes that brains log repetitious actions, decreasing the time it takes for the brain to recognize what is occurring. When the brain identifies what it must do, the signal transmits to the output device. By employing Clark’s theories, researchers limit the time it takes for computers to correct errors, enhance the overall speed of the system, and enable the neural network to make more consistent output (Clark, 1997).

Discoveries in neuroscience lead to intriguing inventions in artificial intelligence and provide humans with computational power unrivaled in the past. As an example, computers prove theories proposed by mathematicians hundreds of years ago. Solutions to the approximate sum of an infinite series of numbers could only be deciphered by a writing utensil and paper. Only individuals blessed with minds like Newton or Einstein contemplated how to solve these problems, but today, ANNs aid all people with a scientific calculator in resolving an infinite series by hitting a few buttons.

Andy Clark, Director of the Cognitive Science Program at Indiana University, describes how digital technology enables humans to ignore distance as a limiting factor of production. At Duke University, researchers discovered patterns of neural signals throughout the brain of an owl monkey. Once documented, these patterns entered a computer that predicted the future movements of the neural networks. Signals from the monkey brain transmitted throughout the computer and controlled a robotic arm receiving the signal 600 miles away at the MIT Touch Lab. Dr. Mandayam Srinivasan, director of the MIT laboratory, noted that the experiment provided the monkey brain with an arm 600 miles away (Clark, 2003). With this type of technology, organizations like NASA possess the ability to control probes in other areas of the solar system, and human knowledge bases extend further than their physical area.

Andy Clark views a cell phone as another link for a person to theoretically be in two places at once (2003). Hundreds of years ago, the actions of a human in one area would not affect a situation far away. Today, an individual can eat lunch, run their business in America, and deal with foreign import companies at the same time. While these artificial networks enhance the ability to focus on multiple projects, they divide the person’s attention span into separate places. This leads us to the detriments of artificial neural networks, specifically the fact that they separate humans from actual experience.

Technological advancement revolutionizes the lives of people across the world, but only time will tell whether or not discovery enables humans to advance. Our fear of Y2K did not stir enough controversy to scare people from reliance on electronics. If all computer systems crash, the world economy will collapse. However, the greatest controversy regarding the development of artificial neural networks in particular involves whether the progress limits the comprehension levels of human beings (Rieder, 2008).

For an artificial neural network to function, it seems that some sort of sensory mechanism must employ itself in the machine. According to German philosopher Immanuel Kant, “any phenomenon consists of sensations, which are caused by particular objects themselves.” These sensations help the mind create schemas, mental representations of objects or places (Gardner, 1987). Through computer imaging and digital technology, humans study realms of knowledge on scales larger and smaller than what the typical individual understands. Without technological aids, the concept of space would not exist. However, when humans rely on technology to study, they lose the direct connection to their project, creating only an abstract assumption of what truly goes on. In the experiment linking the monkey brain to the robotic arm, electronic impulses transferred from the monkey’s brain to the robot. This connection replaces human touch, eliminating that aspect of the human experience. Growing accustomed to a strictly digital education results in a human’s inability to learn through varied methods, including a classroom environment.

The eventual goal of any project is growth, and students seek to expand their knowledge base by attending college. However, modern universities struggle to educate learners in lecture formats. Professor Michael Wesch of Kansas State University created a research video with the assistance of two hundred students in his anthropology course. The video documented the survey results of 133 students at the institution, and the results seem to prove that college students resist reading and believe that coursework contains little to no relevance to their lives. Students claim that on average, they write 42 pages in papers each semester, but type 500 e-mails. Further support that electronics and digital technology hinder humans comes from students stating that they will view approximately 1,281 Facebook profiles in a semester, and read only 8 books in a year. The data seems to coincide with a theory presented over 1,500 years ago (Wesch, 2007).

According to Professor Wesch:

there is no question that the 200-seat lecture hall is, and always has been, an inferior model on which to base a system of teaching and learning…It is a rejection of the dialectic approach of the Academy, dominant until 529 AD when it was finally closed by the Emperor Justinian…It is an authoritarian continuation of the ‘Scholastic’ tradition founded by the early Church ‘schoolmen’ and continuing through the middle ages. It is a 9th century, not a 19th century environment…The problem is imposing a 16th–19th century epistemology on an ill-prepared digital-age mind (2007).

The results of the survey coincide with this theory, leaving no question as to why Kansas State University students appear to lack interest in scholarly endeavors in classes with an average of over 100 students (Wesch, 2007).

In further support of why college students struggle, researchers must consider developmental differences throughout various generations. Generations within the last 100 years grew up turning to the radio and library for entertainment. Verbal and written communication served as the normal trigger to stimulate attention. Currently, infants are exposed to digital media from television and the computer. The idea of a biological neural network now comes into play, as flashing lights and sounds of television shows like “Dora the Explorer” signal the brain to pay attention. Wesch states that “[the] powerfully presentational structure of visual images, movement, light, color, and non-verbal sounds and music makes sustained propositional thought difficult for the students of the digital age” (2007). After encouraging a child to pay attention to the unique bells and whistles of “The Wiggles,” text will not trigger brain stimulation. Biological neural networks in modern-age students rely on intricate visual and audio triggers to stimulate brain attention, whereas students from past generations rely on basic verbal and written communication. Perhaps prolonged exposure to digital-age technology creates a new attention deficit disorder towards teachers’ words and blackboards. An Australian survey found that people who watch less than 1 hour of television per day do better on memory tests than those who watch more. “Couch potatoes,” individuals who watch television for hours on end, increase their risk of developing Alzheimer’s disease as the brain literally zones out (Tesh, 2008). While these facts present themselves repeatedly, artificial neural networks and digital media continue to impede upon the development of biological neural systems.

As Professor Wesch implies, the same theories proposed prior to the year 500 AD apply to researching the quality of college institutions (2007). Some believe that universities with a low faculty-to-student ratio have below-average educational programs. The personal attention given by scholars like Aristotle, Socrates, and Plato remains an important aspect of education. Justinian disowned the methods taught by the Academy because it was more important to industrialize education than to actually create systems that enhanced enlightenment. Military societies did not need smart soldiers, so they rushed students through school and onto the battlefield. Obviously, modern college education represents commercial investment when hundreds of students cram into lecture halls where they have limited opportunities to learn. Ironically, our sophisticated digital systems make education more difficult for masses of people.

If Thomas Kuhn studied the change in educational success, he would note the paradigm shift in the functioning of human neural networks. Artificial neural networks transmit data in ways not previously experienced by humans, and while modern machines process information faster than ever before, the new technologies limit the abilities of biological neural networks. Even with these facts, artificial neural networks play a critical role in the development of humans, creating biological neural pathways that cannot comprehend education through classroom reading, listening, and writing. Only the future will tell whether ANNs lead to human success or failure.