    The Brief and Fascinating History of Machine Learning

    Science fiction loves to imagine a supercomputer capable of superhuman intelligence. Thanks to advancements in machine learning, computers can already outperform people at certain narrow tasks, from playing board games to spotting patterns in massive datasets.

    Machine learning is a branch of computer science in which programmers create algorithms that simulate the human ability to learn. It is already everywhere, from data analysis tools for business to procedurally generated maps in video games, and it is projected to advance by leaps and bounds in the coming years, automating even more services and making certain jobs significantly faster and easier.

    There is a lot to look forward to in the field of machine learning — but there is also a lot to reflect on. Machine learning is a relatively new field, but its past is exhilarating to those interested in rapid development and the future of tech.

    Roots in Neuroscience

    It makes sense that when tasked with creating a machine capable of adapting to new data, humans would use their own biological computer as inspiration. Machine learning has its roots in neuroscience, the field dedicated to understanding how the human brain works. In 1949, psychologist Donald O. Hebb published his theoretical model of communication between neurons in “The Organization of Behavior.” Hebb proposed that the connection between two neurons grows stronger when they are active at the same time. Not only has this idea been supported by later neuroscientific research, it also underpins the weight updates used in artificial neural networks, and with them machine learning as we know it today.
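
    Hebb’s idea translates naturally into code: the strength of a connection is increased in proportion to how often the two units on either side of it are active together. The sketch below is a minimal Python illustration, not anything from Hebb’s book; the variable names and learning rate are made up for the example.

        # Hebbian learning: strengthen a connection when both units fire together.
        def hebbian_update(weight, pre_activity, post_activity, learning_rate=0.1):
            return weight + learning_rate * pre_activity * post_activity

        # Toy usage: two units that repeatedly fire together develop a stronger bond.
        weight = 0.0
        for _ in range(5):
            weight = hebbian_update(weight, pre_activity=1.0, post_activity=1.0)
        print(weight)  # 0.5 after five co-activations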

    Computer Checkers

    The first person to use the term “machine learning” was Arthur Samuel, a developer working for IBM in the 1950s. Samuel created a computer program that learned to play checkers by scoring possible moves based on the positions of the pieces. The program’s goal was to minimize its possible losses while maximizing its gains, an approach embodied in the minimax algorithm, which lets a computer choose a move by assuming the opponent will respond with the move that is worst for it.
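
    Samuel’s checkers player itself is long gone, but the minimax idea is easy to sketch. The Python fragment below is a generic, hypothetical illustration rather than Samuel’s program: the game-specific details (how to score a position, list legal moves, and apply a move) are passed in as placeholder callbacks.

        def minimax(state, depth, maximizing, score, moves, apply_move):
            # The maximizing player picks the move whose worst-case outcome is best,
            # assuming the opponent (the minimizing player) answers optimally.
            legal = moves(state)
            if depth == 0 or not legal:
                return score(state)
            results = (minimax(apply_move(state, m), depth - 1, not maximizing,
                               score, moves, apply_move) for m in legal)
            return max(results) if maximizing else min(results)

        # Toy usage: "states" are numbers, each move adds or subtracts one,
        # and the score of a state is simply its value.
        value = minimax(0, depth=2, maximizing=True,
                        score=lambda s: s,
                        moves=lambda s: [1, -1],
                        apply_move=lambda s, m: s + m)
        print(value)  # 0: the best the first player can guarantee in two plies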

    The Perceptron

    In 1957, at the Cornell Aeronautical Laboratory, psychologist Frank Rosenblatt combined Hebb’s model of brain cell interaction with Samuel’s checkers algorithms to develop one of the first machines capable of learning from new information and altering its behavior accordingly. Rosenblatt called his machine the perceptron.
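
    In modern terms, a perceptron is a single artificial neuron with adjustable weights: it weighs its inputs, makes a yes-or-no prediction, and nudges the weights whenever it is wrong. The Python sketch below is a hypothetical re-creation of that learning rule, not Rosenblatt’s original design; the learning rate, epoch count, and toy AND example are invented for illustration.

        def predict(weights, bias, x):
            # Fire (output 1) if the weighted sum of inputs crosses the threshold.
            return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

        def train_perceptron(samples, labels, epochs=10, learning_rate=0.1):
            weights = [0.0] * len(samples[0])
            bias = 0.0
            for _ in range(epochs):
                for x, target in zip(samples, labels):
                    # Nudge the weights only when the prediction is wrong.
                    error = target - predict(weights, bias, x)
                    weights = [w + learning_rate * error * xi for w, xi in zip(weights, x)]
                    bias += learning_rate * error
            return weights, bias

        # Toy usage: learn a logical AND from four examples.
        w, b = train_perceptron([(0, 0), (0, 1), (1, 0), (1, 1)], [0, 0, 0, 1])
        print([predict(w, b, x) for x in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 0, 0, 1]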

    The Mark 1 perceptron was a custom-built machine designed for image recognition, but the underlying program was meant to run on other machines as well, notably the IBM 704, the first mass-produced computer with floating-point hardware. Unfortunately, expectations surrounding the perceptron outstripped the machine’s capabilities, and because the perceptron struggled to recognize more complex visual patterns, such as human faces, funding for the perceptron project, and for machine learning in general, started to dry up.

    Multilayered Neural Networks

    Computer scientists continued working on machine learning algorithms through the 1960s and ’70s, despite a lack of public and financial support. Fortunately, the period still produced significant innovations that drove the field forward, including the use of multiple layers in neural networks.

    Layers give machine learning programs a way to organize data, which in turn gives them more capacity to make decisions. Most neural networks have three kinds of layers: an input layer, a hidden layer and an output layer. The input layer accepts new information, and the output layer is where the program produces a result or makes a decision. The hidden layer, which can itself consist of many stacked layers, performs the intermediate transformations that help the program decipher the data and learn from it.
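
    A forward pass through such a network can be sketched in a few lines of Python. The example below uses NumPy and random weights purely for illustration; the layer sizes and the tanh nonlinearity are arbitrary choices, not something prescribed by the article.

        import numpy as np

        def forward(x, w_hidden, w_output):
            # Input layer -> hidden layer: transform the raw features.
            hidden = np.tanh(x @ w_hidden)
            # Hidden layer -> output layer: produce the result or decision.
            return hidden @ w_output

        rng = np.random.default_rng(0)
        x = rng.normal(size=(1, 4))          # one example with 4 input features
        w_hidden = rng.normal(size=(4, 8))   # weights from input to hidden layer
        w_output = rng.normal(size=(8, 2))   # weights from hidden to output layer
        print(forward(x, w_hidden, w_output).shape)  # (1, 2): two output values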

    Using multiple layers, computer scientists can train machines to respond to particular types of data in particular ways, which allows machines to complete exceedingly complex tasks. Today, the hidden layers of a neural network can be even more adept than human brains at recognizing certain narrow kinds of patterns, which is why machine learning is becoming such an invaluable tool.

    Modern Machine Learning

    A renaissance in the 1990s accelerated the field’s development and essentially created the modern environment of machine learning tools. Researchers developed a number of techniques essential to letting machines improve on their own, without additional programming: boosting algorithms that combine many weak models into a strong one, feedforward networks trained with backpropagation, and the methods behind speech and facial recognition, among others. Today, machine learning is the foundation for dozens of exciting new technologies and computer science concepts, and business leaders can begin harnessing its power by taking short online courses focused on applying machine learning tools and principles to business.

    Psychologists, computer scientists, programmers and business leaders have taken inspiration from the powerful human brain to develop machine learning tools. Now, as we apply these tools to our existing processes, we will inevitably see incredible results in business and society.

    by Zoe Perry