[EXPERT’TECH] How to cope with common difficulties when customizing your AI model?

Posted on: 19/07/2017


Since a machine learning model has to be adjusted to the needs of a specific task, it is natural to run into common difficulties such as overfitting or a simple loss of model performance.

Here are some techniques our team applied while working on a business case related to intelligent text analysis. It is important to mention that even though these techniques proved efficient in our case, this does not mean they are universally applicable. However, we are convinced it may be quite helpful for our readers to get familiar with one more use case where machine learning helps analyze user satisfaction from comments.

Question 1

What is our existing model?

The Azure Cognitive Services Text Analytics API: text analytics web services built with Azure Machine Learning, applying advanced natural language processing techniques to deliver best-in-class predictions (see Question 3).

Question 2

How does it work?

According to the documentation provided by Microsoft: "The API returns a numeric score between 0 and 1. Scores close to 1 indicate positive sentiment, while scores close to 0 indicate negative sentiment. Sentiment score is generated using classification techniques. The input features to the classifier include n-grams, features generated from part-of-speech tags, and word embeddings."

Currently, sentiment analysis is supported in English, Spanish, French, and Portuguese. 11 additional languages are available in preview. See the Text Analytics Documentation for details.
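To make this concrete, here is a minimal Python sketch of a call to the sentiment endpoint. The region, key placeholder, and payload shape are assumptions based on the v2.0 Text Analytics documentation at the time of writing; check the official documentation for the exact endpoint of your subscription.

```python
import requests

# Assumed values: replace with your own region and subscription key.
SUBSCRIPTION_KEY = "<your-text-analytics-key>"
ENDPOINT = "https://westeurope.api.cognitive.microsoft.com/text/analytics/v2.0/sentiment"

documents = {
    "documents": [
        {"id": "1", "language": "en", "text": "The delivery was fast and the support team was very helpful."},
        {"id": "2", "language": "fr", "text": "Le produit est arrivé cassé, je suis très déçu."},
    ]
}

headers = {"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY}
response = requests.post(ENDPOINT, headers=headers, json=documents)
response.raise_for_status()

# Each document gets a score between 0 (negative) and 1 (positive).
for doc in response.json()["documents"]:
    print(doc["id"], doc["score"])
```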

Question 3

So Azure imposes certain models (or algorithms) on us, and we therefore have no flexibility when working with its AI tools?

Not at all. In fact, it is possible to construct the same model in Azure ML Studio using classification algorithms (such as neural networks or logistic regression); a rough Python sketch of this kind of classifier follows the note below. Moreover, you can construct your own evaluation model, which may drastically increase the quality of your sentiment analysis.

Important! The free version is limited to 1 hour per experiment.
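For readers who want to see what such a classifier looks like outside the ML Studio canvas, here is a minimal sketch with scikit-learn. The toy data and pipeline are our own illustration, not the exact experiment built in ML Studio; they simply show the same family of models (n-gram features plus logistic regression).

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy labelled comments (1 = satisfied, 0 = unsatisfied); in practice this
# would be your own annotated data set.
texts = ["great service, very satisfied", "terrible experience, never again",
         "fast delivery and helpful staff", "the product broke after one day"]
labels = [1, 0, 1, 0]

# N-gram features + logistic regression, i.e. the same family of classifiers
# available as drag-and-drop modules in Azure ML Studio.
model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("clf", LogisticRegression()),
])
model.fit(texts, labels)

# Probability of the "satisfied" class for a new comment.
print(model.predict_proba(["the support was really disappointing"])[0][1])
```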

Question 4

Possible improvements:

| Solution | Manual Mode (Azure ML Studio) | Automatic Mode (Cognitive Services API) |
|---|---|---|
| Improve performance with data | Possible | Partly possible |
| Improve performance with algorithms | Possible | Not possible |
| Improve performance with algorithm tuning | Possible | ? (we did not finish documenting the API) |
| Improve performance with ensembles | Possible | Possible (but still tricky to adjust after comparing with another model) |
| Sentence type classification using BiLSTM-CRF and CNN | Possible | Not possible |
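To illustrate what "algorithm tuning" means in manual mode, here is a hedged sketch of the same idea outside Studio, using a grid search over a small, arbitrary parameter set; the grid and toy data are placeholders, not the values we actually used.

```python
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Placeholder data: six labelled comments, three per class.
texts = ["excellent product", "awful quality", "very happy with it",
         "complete waste of money", "would recommend", "never buying again"]
labels = [1, 0, 1, 0, 1, 0]

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer()),
    ("clf", LogisticRegression()),
])

# Arbitrary illustrative grid: n-gram range and regularization strength.
param_grid = {
    "tfidf__ngram_range": [(1, 1), (1, 2)],
    "clf__C": [0.1, 1.0, 10.0],
}

search = GridSearchCV(pipeline, param_grid, cv=3, scoring="accuracy")
search.fit(texts, labels)
print(search.best_params_, search.best_score_)
```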

Question 5

Possible alternatives?

Here is a list of ready-to-use solutions that can easily be installed and applied in your Azure workspace:

| Name | Domain | Performance | Pricing | Description |
|---|---|---|---|---|
| NLTK | Human language data | No official data provided | Free | NLTK is a platform for building Python programs; it provides interfaces to over 50 corpora and lexical resources such as WordNet, along with a suite of text processing libraries for classification, tokenization, stemming, tagging, parsing, and semantic reasoning, wrappers for industrial-strength NLP libraries, and an active discussion forum. |
| Key Extraction Algorithm (KEA) | Keyword extraction for auto-indexing | Good | Free | Used to describe the content of single documents and provide a kind of semantic metadata that is useful for a wide variety of purposes. |
| TreeTagger | Annotating text with part-of-speech and lemma information | Did not correspond to our needs (tested on the first 50 rows of the working data set with 10 rows as a training set) | Free | Has been successfully used to tag German, English, French, Italian, Dutch, Spanish, Bulgarian, Russian, Portuguese, Galician, Chinese, Swahili, Slovak, Slovenian, Latin, Estonian, Polish, Romanian, Czech, Coptic, and Old French texts, and is adaptable to other languages if a lexicon and a manually tagged training corpus are available. |
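As a quick taste of NLTK from the table above, here is a minimal sentiment sketch using its VADER analyzer. This is only one possible usage, and quite different from the Azure API: VADER is English-only, rule-based, and scores on a -1 to +1 scale rather than 0 to 1.

```python
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

# The VADER lexicon ships as a separate download.
nltk.download("vader_lexicon")

analyzer = SentimentIntensityAnalyzer()
scores = analyzer.polarity_scores("The interface is great but the documentation is confusing.")

# 'compound' ranges from -1 (most negative) to +1 (most positive).
print(scores["compound"])
```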

Question 6

Conclusion

The main idea is to build text classification by sentiment using Azure ML and to construct the model evaluation yourself (applying a 70/30 percentage split in our case). This gives the developer a correctly evaluated machine learning model that can later be reused, with the possibility of adjusting algorithmic parameters such as the learning rate, the number N of sampled features, etc. A minimal sketch of such an evaluation is given below.
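Here is a minimal sketch of such a 70/30 split and evaluation, done with scikit-learn rather than the ML Studio Split Data and Evaluate Model modules; the comments, labels, and metric are placeholders.

```python
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Placeholder comments and satisfaction labels.
texts = ["love it", "hate it", "works well", "waste of money",
         "very satisfied", "not happy at all", "excellent support", "too slow"]
labels = [1, 0, 1, 0, 1, 0, 1, 0]

# 70/30 percentage split, as in the ML Studio experiment.
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.3, stratify=labels, random_state=42)

model = Pipeline([("tfidf", TfidfVectorizer()), ("clf", LogisticRegression())])
model.fit(X_train, y_train)

# Evaluate on the held-out 30%.
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```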

Written by Alibek Jakupov
