I'm no kernel developer, so I don't know. The neural net was trained on a recorded dataset, though, so the training could be done offline.
The specific predictor they created was integer-based, so it could be used in a non-floating-point kernel:
We then use an SVM with the k-sparse binary feature. Since integer operations are much cheaper in hardware than floating point operations, we use an Integer SVM (ISVM) with an integer margin and learning rate of 1.
Again, this highlights the interesting point of the paper, IMHO.
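
In case it helps make that concrete, here is a minimal sketch of what an integer SVM with k-sparse binary features, an integer margin, and a learning rate of 1 can look like. This is written in C under my own assumptions (table size, margin value, weight clamping, and the hashing of feature indices are all illustrative), not the paper's actual implementation:

    /* Sketch of an ISVM update: integer weights, integer margin,
     * learning rate of 1. Sizes and hashing are assumptions. */

    #include <stdint.h>
    #include <stdbool.h>

    #define NUM_WEIGHTS 2048   /* assumed weight-table size */
    #define K           5      /* number of active (k-sparse) binary features */
    #define MARGIN      30     /* assumed integer training margin */
    #define W_MAX       31     /* clamp weights to a small integer range */
    #define W_MIN      -32

    int8_t weights[NUM_WEIGHTS];

    /* With k-sparse binary features, the dot product is just the sum of
     * the k weights selected by the active feature indices. */
    int isvm_score(const uint16_t active[K])
    {
        int sum = 0;
        for (int i = 0; i < K; i++)
            sum += weights[active[i] % NUM_WEIGHTS];
        return sum;
    }

    /* Training step: label is +1 or -1. Update only when the prediction
     * is wrong or falls inside the margin, adding or subtracting 1
     * (learning rate 1). Integer arithmetic only, no floating point. */
    void isvm_train(const uint16_t active[K], int label)
    {
        int score = isvm_score(active);
        if (label * score > MARGIN)
            return;                        /* confident and correct: no update */
        for (int i = 0; i < K; i++) {
            int idx = active[i] % NUM_WEIGHTS;
            int w = weights[idx] + label;  /* +1 or -1 */
            if (w > W_MAX) w = W_MAX;
            if (w < W_MIN) w = W_MIN;
            weights[idx] = (int8_t)w;
        }
    }

    /* Inference just thresholds the score. */
    bool isvm_predict(const uint16_t active[K])
    {
        return isvm_score(active) >= 0;
    }

Everything above is additions, shifts, and comparisons, which is why this kind of predictor is plausible both in hardware and in kernel code that can't touch the FPU.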