Problem of continuous training - Supervised learning

Data Science Asked by Sandeep Bhutani on August 19, 2020

I am sure this is a very common problem, but I would like to hear from experts on how to tackle it. Note that I mostly deal with textual data (NLP problems).
When a supervised learning model is created, say a text classifier, and it works well on seen data, we deploy the model in production (think of a chatbot, for example).

But in production, when a new type of data arrives, predictions fail: we find that a new word or pattern breaks the model. So we go ahead and retrain the model with the newly encountered data. This is where the continuous learning problem starts.

Can ML/NLP veterans please suggest some alternatives to reduce this manual labor? The following approaches have been tried, with their problems listed:

  • We simply can’t retrain on new data indefinitely. Production systems should be self-healing; we can’t afford the cost of a human admin constantly monitoring the project. It is also practically impossible to gather comprehensive domain data during the model training phase.
  • Advanced embeddings and SoTA models like BERT. (Problem: the accuracy of these models is too hard to control.)
  • Synthetic data generation / data augmentation. (Problem: does not work well for NLP problems. Refer: training-with-less-data)
  • Unsupervised classification. (Problem: does not work well on closed-domain problems, as most unsupervised models are either statistical, giving fair but not decent accuracy, or are trained on public-domain data.)
  • Reinforcement learning. (Problem: real-world NLP data is not labeled, unlike a self-driving car where the feedback is instant.)
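To make the manual monitoring concrete: what we currently do by hand amounts to spotting inputs full of words the model has never seen. A minimal sketch of automating that first step (the vocabulary and threshold here are illustrative, not from any particular system):

```python
def oov_rate(text: str, train_vocab: set) -> float:
    """Fraction of tokens in `text` that were never seen during training."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    unseen = [t for t in tokens if t not in train_vocab]
    return len(unseen) / len(tokens)

def needs_review(text: str, train_vocab: set, threshold: float = 0.3) -> bool:
    """Route inputs with many unseen words to a human instead of the model."""
    return oov_rate(text, train_vocab) >= threshold

# Toy vocabulary standing in for the real training vocabulary.
train_vocab = {"refund", "order", "status", "cancel", "my"}
print(needs_review("cancel my order", train_vocab))               # False
print(needs_review("blockchain wallet seed phrase", train_vocab))  # True
```

This at least turns "a human notices the model broke" into "flagged inputs pile up in a review queue", which is a precondition for any of the retraining strategies below.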

2 Answers

What you are describing is called auto-adaptive learning. This is what most recommendation systems use to adapt to ever-changing data and feedback; it is also closely related to AutoML. This Article does a good job of explaining it. Based on what your data looks like, you may have to choose an appropriate retraining strategy and do a staggered deployment.
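A retraining strategy needs a trigger. One simple sketch, assuming you can collect correct/incorrect feedback on predictions (the window size and accuracy floor here are made-up defaults you would tune):

```python
from collections import deque

class RetrainMonitor:
    """Track rolling accuracy over user feedback; signal retraining on drift."""

    def __init__(self, window: int = 100, min_accuracy: float = 0.85):
        self.results = deque(maxlen=window)  # True = prediction was correct
        self.min_accuracy = min_accuracy

    def record(self, correct: bool) -> None:
        self.results.append(correct)

    def should_retrain(self) -> bool:
        # Only judge once the window is full, to avoid noisy early triggers.
        if len(self.results) < self.results.maxlen:
            return False
        return sum(self.results) / len(self.results) < self.min_accuracy
```

When `should_retrain()` fires, the new model can be rolled out to a small slice of traffic first (the staggered deployment mentioned above) before replacing the old one everywhere.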

Answered by tehem on August 19, 2020

One solution is "human in the loop" with a sentence encoder. I understand the NLP world and the kind of problem you are asking about: there is no single straight-through solution. You can use a hybrid approach combining cosine similarity + topic modelling + fuzzywuzzy + BERT, and then use a voting mechanism to filter out the best resolution.
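The core of this idea can be sketched with plain bag-of-words vectors in place of a sentence encoder (the labels, threshold, and `HUMAN_REVIEW` sentinel are illustrative): match against labelled examples by cosine similarity, and defer to a human whenever even the best match is weak.

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(text: str, labelled_examples: list, threshold: float = 0.5) -> str:
    """Return the label of the nearest labelled example, or defer to a
    human when the best similarity falls below `threshold`."""
    q = Counter(text.lower().split())
    best_label, best_score = None, 0.0
    for example, label in labelled_examples:
        score = cosine(q, Counter(example.lower().split()))
        if score > best_score:
            best_label, best_score = label, score
    return best_label if best_score >= threshold else "HUMAN_REVIEW"
```

Swapping `Counter` vectors for real sentence embeddings (e.g. from BERT) keeps the same structure; the voting idea then amounts to running several such scorers and taking the majority label, still falling back to a human on disagreement.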

Answered by Syenix on August 19, 2020
