Worth a watch, and a laugh or two —
“Google’s artificial intelligence company, DeepMind, has developed an AI that has managed to learn how to walk, run, jump, and climb without any prior guidance. The result is as impressive as it is goofy.”
Machine learning can be a powerful tool in the creation of predictive models. But it doesn’t provide a magic bullet. In the end, effective machine learning works very much like other high-value human endeavors. It requires experimentation, evaluation, lots of work, and a measure of hard-earned wisdom.
As Kaggle Competitions Grandmaster Marios Michailidis (AKA KazAnova) explains:
No model is perfect. Almost all of the time, models make mistakes. Plus, each model has different advantages and disadvantages, and they tend to see the data from different angles. Leveraging the uniqueness of each model is of the essence for building very predictive models.
To help with this process, David H. Wolpert introduced the concept of stacked generalization in his 1992 paper, "Stacked Generalization."
Michailidis explains the process as follows:
Stacking or Stacked Generalization … normally involves a four-stage process. Consider 3 datasets A, B, C. For A and B we know the ground truth (or in other words the target variable y). We can use stacking as follows:

In brief: (1) train several base models on dataset A; (2) have each base model generate predictions for B and C, collecting those predictions into new datasets B1 and C1; (3) train a meta-model on B1, using B's known target y; and (4) use that meta-model to produce the final predictions for C1.
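To make the recipe concrete, here is a minimal sketch of those four stages in Python with scikit-learn. The synthetic data, model choices, and variable names are illustrative assumptions rather than anything prescribed by Michailidis:

```python
# A minimal sketch of the four-stage stacking recipe, assuming a
# binary classification task.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Stand-ins for datasets A and B (labeled) and C (to be predicted).
X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
X_a, X_rest, y_a, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_b, X_c, y_b, y_c = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

# Stage 1: train several base models on A.
base_models = [
    RandomForestClassifier(n_estimators=100, random_state=0),
    GradientBoostingClassifier(random_state=0),
    LogisticRegression(max_iter=1000),
]
for model in base_models:
    model.fit(X_a, y_a)

# Stage 2: build new datasets B1 and C1 from the base models' predictions.
B1 = np.column_stack([m.predict_proba(X_b)[:, 1] for m in base_models])
C1 = np.column_stack([m.predict_proba(X_c)[:, 1] for m in base_models])

# Stage 3: train a meta-model on B1 against B's known target.
meta_model = LogisticRegression()
meta_model.fit(B1, y_b)

# Stage 4: make the final predictions on C1.
final_predictions = meta_model.predict(C1)
print("Stacked accuracy on C:", accuracy_score(y_c, final_predictions))
```

Note that stage 2 stacks the base models' predicted probabilities column-wise, so with three base models, B1 and C1 each have three columns.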
As part of his PhD work, Michailidis developed a software framework, StackNet, to speed up this process.
Marios Michailidis describes StackNet in this way:
StackNet is a computational, scalable and analytical framework, implemented in Java, that resembles a feedforward neural network and uses Wolpert's stacked generalization across multiple levels to improve accuracy in classification problems. In contrast to a feedforward neural network trained through backpropagation, the network is built iteratively, one layer at a time (using stacked generalization), and each layer uses the final target as its target.
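As a rough illustration of that layer-by-layer idea, here is a sketch in Python with scikit-learn. It is not StackNet's actual Java implementation or API; the function names and structure are assumptions for this sketch, which only mimics the pattern the quote describes: each layer is fit against the final target, and its predictions become the next layer's input features:

```python
# An illustrative, StackNet-inspired layer-by-layer stacker (a sketch,
# not StackNet's real API).
import numpy as np
from sklearn.base import clone
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

def fit_layers(layers, X, y, cv=5):
    """Fit each layer against the final target y, feeding its
    out-of-fold predictions forward as the next layer's features."""
    fitted_layers, features = [], X
    for i, layer in enumerate(layers):
        fitted_layers.append([clone(m).fit(features, y) for m in layer])
        if i < len(layers) - 1:
            # Out-of-fold probabilities keep the target from leaking
            # into the layers above.
            features = np.column_stack([
                cross_val_predict(clone(m), features, y, cv=cv,
                                  method="predict_proba")[:, 1]
                for m in layer
            ])
    return fitted_layers

def predict_layers(fitted_layers, X):
    """Push features through each layer; average the last layer's scores."""
    features = X
    for layer in fitted_layers[:-1]:
        features = np.column_stack(
            [m.predict_proba(features)[:, 1] for m in layer])
    return np.mean(
        [m.predict_proba(features)[:, 1] for m in fitted_layers[-1]], axis=0)

# Example: a two-level stack, like a tiny feedforward network of models.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
layers = [
    [RandomForestClassifier(n_estimators=50, random_state=0),
     LogisticRegression(max_iter=1000)],
    [LogisticRegression()],  # final meta layer
]
fitted = fit_layers(layers, X, y)
print(predict_layers(fitted, X)[:5])
```

Using out-of-fold predictions between layers plays the same role the holdout data plays in the four-stage recipe above: it keeps each layer from simply memorizing the target.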
StackNet is available on GitHub under the MIT license.
Be sure to read the interview with Michailidis about stacking and StackNet on the Kaggle blog.