Deep Learning Tasks and Models

Machine Learning Tasks and Model Evaluation


Machine learning is the study of how to create and evaluate models learned from data. There are hundreds of model-building techniques, and researchers keep adding new techniques and architectures as the need arises. But how do you evaluate the models that these trainings produce? For classification, regression, or clustering models on structured data, a handful of standard metrics suffices. Evaluation becomes harder with text, image, and audio data: how do you compare ten models that translate text, locate objects in images, transcribe speech, or caption images? To solve this problem, standard datasets are created, and everyone demonstrates the performance of their model, architecture, or approach against those datasets. But even with a baseline dataset, how do you evaluate models across many different NLP and deep learning tasks? Benchmarks such as GLUE and SuperGLUE were created for exactly that purpose.
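For structured-data models, the metrics mentioned above are straightforward to compute. A minimal sketch of two of the most common ones, accuracy and F1, for a binary classifier (the labels and predictions below are made up for illustration):

```python
# Two common evaluation metrics for classification models,
# computed directly from true and predicted labels (binary case).

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return correct / len(y_true)

def f1_score(y_true, y_pred, positive=1):
    """Harmonic mean of precision and recall for the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t != positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 1, 1]
print(accuracy(y_true, y_pred))   # 4 of 6 predictions are correct
print(f1_score(y_true, y_pred))
```

Which metric is appropriate depends on the task: accuracy is misleading on imbalanced labels, which is why several GLUE tasks report F1 or a correlation measure instead.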

What is the GLUE Benchmark?

The GLUE (General Language Understanding Evaluation) benchmark is a collection of diverse natural language processing (NLP) tasks designed to evaluate and compare the performance of various machine learning models and techniques in understanding and processing human language. It serves as a standard evaluation framework for assessing the general language understanding capabilities of different models.

The GLUE benchmark was introduced in 2018 and consists of a set of nine different NLP tasks, covering a wide range of language understanding tasks including sentence classification, sentence similarity, natural language inference, and question answering. Some of the tasks included in GLUE are the Stanford Sentiment Treebank, Multi-Genre Natural Language Inference, and the Recognizing Textual Entailment dataset.

The primary goal of the GLUE benchmark is to encourage the development of models that can perform well across multiple NLP tasks, thus demonstrating a more comprehensive understanding of human language. The performance of models is measured using a single metric called the GLUE score, which is computed by aggregating the performance of models on individual tasks.
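The aggregation behind such a score is simple: a hedged sketch of forming one benchmark number by macro-averaging per-task results, where a task with several metrics is first averaged into a single per-task score. The task names and numbers below are illustrative, not official leaderboard figures:

```python
# Illustrative aggregation of per-task results into one benchmark score.

def benchmark_score(task_scores):
    """task_scores maps task name -> list of metric values for that task.
    Tasks are first collapsed to one number each, then averaged equally."""
    per_task = [sum(metrics) / len(metrics) for metrics in task_scores.values()]
    return sum(per_task) / len(per_task)

scores = {
    "CoLA":  [52.1],          # Matthews correlation
    "SST-2": [93.5],          # accuracy
    "MRPC":  [88.9, 84.8],    # F1 / accuracy
    "STS-B": [87.1, 85.8],    # Pearson / Spearman correlation
}
print(benchmark_score(scores))
```

The unweighted average means a model cannot climb the leaderboard by excelling at one task while ignoring the others, which is the point of a multi-task benchmark.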

The GLUE benchmark has been instrumental in advancing the field of NLP and has served as a benchmark for many state-of-the-art models, including various transformer-based architectures like BERT (Bidirectional Encoder Representations from Transformers) and RoBERTa.

It’s worth noting that since the introduction of the GLUE benchmark, other benchmarks like SuperGLUE and XTREME have been developed to address some limitations and provide more challenging evaluation tasks for further advancing the state of NLP research.


What is the SuperGLUE Benchmark?

The SuperGLUE (Super General Language Understanding Evaluation) benchmark is an enhanced version of the GLUE benchmark introduced to address its limitations and provide more challenging language understanding tasks for evaluating and comparing the performance of natural language processing (NLP) models. SuperGLUE builds upon the success of GLUE and aims to push the boundaries of NLP research further.

SuperGLUE was introduced in 2019 as an extension of GLUE, consisting of a more diverse and difficult set of language understanding tasks. It includes a total of eight challenging tasks, including tasks like BoolQ (Boolean Questions), COPA (Choice of Plausible Alternatives), and RTE (Recognizing Textual Entailment), among others. These tasks are carefully designed to require more advanced reasoning and understanding abilities from models.

The primary objective of SuperGLUE is to evaluate models on a more comprehensive set of tasks that demand higher levels of language comprehension and reasoning capabilities. It provides a broader and more challenging evaluation platform to assess the progress and performance of NLP models beyond what was covered by the original GLUE benchmark.

Similar to GLUE, SuperGLUE also utilizes a single evaluation metric called the SuperGLUE score to assess model performance across the different tasks. The SuperGLUE benchmark has spurred further research and development in the field of NLP, pushing for advancements in model architectures, training techniques, and performance improvements.

SuperGLUE has become a prominent benchmark for evaluating the state-of-the-art NLP models, building on the success of GLUE and encouraging the development of more sophisticated models that can tackle complex language understanding tasks.

It’s important to note that the SuperGLUE benchmark, while providing more challenging tasks, is still evolving, and researchers continue to work on expanding and refining the benchmark to further push the boundaries of NLP research.


What is the XTREME Benchmark?

The XTREME (Cross-lingual TRansfer Evaluation of Multilingual Encoders) benchmark is a comprehensive evaluation framework introduced in 2020 for assessing the performance of multilingual models in natural language understanding (NLU) tasks across multiple languages. It aims to evaluate the generalization and transfer capabilities of models in cross-lingual settings.

XTREME was developed as an extension of previous benchmarks like GLUE and SuperGLUE, with a specific focus on evaluating models’ abilities to understand and process languages beyond English. It includes a diverse range of tasks spanning multiple languages, such as named entity recognition, part-of-speech tagging, sentence classification, sentence retrieval, and question answering. The tasks in the XTREME benchmark fall into four categories:

  • Sentence-pair Classification
  • Structured Prediction
  • Question Answering
  • Sentence Retrieval
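As an illustration of the sentence-retrieval category, a minimal sketch that picks the closest candidate sentence by cosine similarity between embeddings. The tiny vectors and candidate ids below are made up; a real system would obtain embeddings from a multilingual encoder:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, candidates):
    """Return the candidate id whose embedding is most similar to the query."""
    return max(candidates, key=lambda cid: cosine(query_vec, candidates[cid]))

# Hypothetical candidate sentence embeddings, keyed by sentence id.
candidates = {
    "en-1": [0.9, 0.1, 0.0],
    "en-2": [0.1, 0.8, 0.3],
    "en-3": [0.0, 0.2, 0.9],
}
print(retrieve([0.85, 0.15, 0.05], candidates))  # nearest neighbour: "en-1"
```

In the cross-lingual setting, the query embedding would come from a sentence in one language and the candidates from another; retrieval accuracy then measures how well the encoder aligns the two languages.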

The main objective of the XTREME benchmark is to encourage the development of models that can effectively transfer knowledge across different languages, leveraging pretraining on large-scale multilingual data. By evaluating models on a wide range of languages and tasks, XTREME provides insights into the cross-lingual transfer capabilities and identifies areas for improvement in multilingual NLU.

Similar to GLUE and SuperGLUE, XTREME utilizes a single metric called the XTREME score to assess the performance of models across the various tasks and languages. The XTREME benchmark serves as an important evaluation platform for advancing research and development in multilingual NLU, fostering the development of models that can effectively handle language diversity and facilitate cross-lingual understanding.
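A sketch in the spirit of that aggregation, assuming each task is first averaged over its evaluation languages and the per-task averages are then averaged into one number (task names and scores below are illustrative):

```python
# Illustrative cross-lingual aggregation: average over languages
# within each task, then average the per-task results.

def xtreme_style_score(results):
    """results maps task name -> {language code: score}."""
    per_task = [sum(langs.values()) / len(langs) for langs in results.values()]
    return sum(per_task) / len(per_task)

results = {
    "XNLI":   {"en": 85.0, "de": 78.0, "sw": 65.0},
    "TyDiQA": {"en": 72.0, "fi": 60.0},
}
print(xtreme_style_score(results))
```

Averaging over languages first keeps a high-resource language like English from dominating the score, which is what makes the metric sensitive to cross-lingual transfer rather than English-only performance.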

XTREME has gained significant attention and has been instrumental in driving progress in multilingual NLU, pushing researchers to develop models that exhibit strong cross-lingual transfer capabilities and perform well across a wide range of languages and tasks. The benchmark continues to evolve and expand to include additional languages, tasks, and evaluation metrics to further enhance the evaluation of multilingual models.


NLP and Deep Learning Tasks

Below is a list of different NLP and deep learning tasks for which benchmark datasets have been created; a model’s performance is measured against those tasks.

GLUE & SuperGLUE tasks

| S.No | Task | Corpus | Paper | GLUE | SuperGLUE |
|------|------|--------|-------|------|-----------|
| 1 | Sentence acceptability judgment | CoLA (Corpus of Linguistic Acceptability) | Warstadt et al., 2018 | Yes | No |
| 2 | Sentiment analysis | SST-2 (Stanford Sentiment Treebank) | Socher et al., 2013 | Yes | No |
| 3 | Paraphrasing/sentence similarity | MRPC (Microsoft Research Paraphrase Corpus) | Dolan and Brockett, 2005 | Yes | No |
| 4 | | STS-B (Semantic Textual Similarity Benchmark) | Cer et al., 2017 | Yes | No |
| 5 | | QQP (Quora Question Pairs) | Iyer et al., 2017 | Yes | No |
| 6 | Natural language inference | MNLI (Multi-Genre Natural Language Inference) | Williams et al., 2017 | Yes | No |
| 7 | | QNLI (Question-answering Natural Language Inference) | Rajpurkar et al., 2016 | Yes | No |
| 8 | | RTE (Recognizing Textual Entailment) | Dagan et al., 2005 | Yes | Yes |
| 9 | | CB (CommitmentBank) | De Marneffe et al., 2019 | No | Yes |
| 10 | Sentence completion | COPA (Choice of Plausible Alternatives) | Roemmele et al., 2011 | No | Yes |
| 11 | Word sense disambiguation | WiC (Word-in-Context) | Pilehvar and Camacho-Collados, 2018 | No | Yes |
| 12 | | WNLI (Winograd Natural Language Inference) | | Yes | No |
| 13 | Question answering | MultiRC (Multi-sentence Reading Comprehension) | Khashabi et al., 2018 | No | Yes |
| 14 | | ReCoRD (Reading Comprehension with Commonsense Reasoning Dataset) | Zhang et al., 2018 | No | Yes |
| 15 | | BoolQ (Boolean Questions) | Clark et al., 2019 | No | Yes |
| 16 | Common sense reasoning | WSC (Winograd Schema Challenge) | | No | Yes |
| 17 | | AX-b (Broad-coverage Diagnostic; Matthews corr.) | | No | Yes |
| 18 | | AX-g (Winogender Schema Diagnostic; gender parity / accuracy) | | No | Yes |
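The table above names Matthews correlation as the metric for CoLA and the AX-b diagnostic. A minimal sketch of that metric for binary labels:

```python
import math

def matthews_corr(y_true, y_pred):
    """Matthews correlation coefficient for binary labels (0/1).
    Ranges from -1 (total disagreement) to +1 (perfect agreement);
    returns 0.0 when the denominator degenerates to zero."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    if denom == 0:
        return 0.0
    return (tp * tn - fp * fn) / denom

print(matthews_corr([1, 1, 0, 0], [1, 1, 0, 0]))  # perfect agreement -> 1.0
print(matthews_corr([1, 1, 0, 0], [0, 0, 1, 1]))  # perfect disagreement -> -1.0
```

Matthews correlation is used for CoLA because acceptability labels are imbalanced, and it stays informative where plain accuracy would look deceptively high.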

Deep Learning Tasks & Models on Hugging Face (100K+ Models)

Computer Vision Models, 6000+ Models

1 Depth Estimation
2 Image Classification
3 Image Segmentation
4 Image-to-Image
5 Object Detection
6 Video Classification
7 Unconditional Image Generation
8 Zero-Shot Image Classification

Natural Language Processing Models, 65000+ Models

1 Conversational
2 Fill-Mask
3 Question Answering
4 Sentence Similarity
5 Summarization
6 Table Question Answering
7 Text Classification
8 Text Generation
9 Token Classification
10 Translation
11 Zero-Shot Classification

Audio Models, 10000+ Models

1 Audio Classification
2 Audio-to-Audio
3 Automatic Speech Recognition
4 Text-to-Speech

Tabular Models

1 Tabular Classification
2 Tabular Regression

Multimodal Models, 9000+ Models

1 Document Question Answering
2 Feature Extraction
3 Image-to-Text
4 Text-to-Image
5 Text-to-Video
6 Visual Question Answering

Reinforcement Learning, 22000+ Models