Optimizing QA Models Using Language Heuristics & Knowledge Distillation


  • Fine-tuned base BERT and DistilBERT models on the SQuAD 2.0 question answering dataset (a fine-tuning sketch follows this list).
  • Produced a comparative analysis report of performance metrics and processing time for BERT versus DistilBERT.
  • Incorporated linguistically informed postprocessing of candidate answers to improve exact match scores (see the reranking sketch below).
  • Generated an augmented training set with synonym and random word replacement and compared its performance against the original set (see the augmentation sketch below).
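
The fine-tuning step can be reproduced with a short script. The sketch below is a minimal example assuming the Hugging Face transformers and datasets libraries (the page does not state the actual toolchain), with illustrative hyperparameters; swapping model_name between "distilbert-base-uncased" and "bert-base-uncased" yields the two runs compared above.

# Minimal SQuAD 2.0 fine-tuning sketch (assumed toolchain: transformers + datasets).
from datasets import load_dataset
from transformers import (AutoModelForQuestionAnswering, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "distilbert-base-uncased"  # or "bert-base-uncased" for the BERT run
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
squad = load_dataset("squad_v2")

def preprocess(examples):
    inputs = tokenizer(
        [q.strip() for q in examples["question"]],
        examples["context"],
        max_length=384,
        truncation="only_second",
        return_offsets_mapping=True,
        padding="max_length",
    )
    offsets = inputs.pop("offset_mapping")
    starts, ends = [], []
    for i, offset in enumerate(offsets):
        answer = examples["answers"][i]
        if not answer["answer_start"]:        # SQuAD 2.0 unanswerable question:
            starts.append(0); ends.append(0)  # point both labels at [CLS].
            continue
        start_char = answer["answer_start"][0]
        end_char = start_char + len(answer["text"][0])
        seq_ids = inputs.sequence_ids(i)
        ctx_start = seq_ids.index(1)                          # first context token
        ctx_end = len(seq_ids) - 1 - seq_ids[::-1].index(1)   # last context token
        if offset[ctx_start][0] > start_char or offset[ctx_end][1] < end_char:
            starts.append(0); ends.append(0)  # answer truncated out of the window
            continue
        # Map the character-level answer span onto token positions.
        idx = ctx_start
        while idx <= ctx_end and offset[idx][0] <= start_char:
            idx += 1
        starts.append(idx - 1)
        idx = ctx_end
        while idx >= ctx_start and offset[idx][1] >= end_char:
            idx -= 1
        ends.append(idx + 1)
    inputs["start_positions"] = starts
    inputs["end_positions"] = ends
    return inputs

tokenized = squad.map(preprocess, batched=True,
                      remove_columns=squad["train"].column_names)
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="qa-squad2", learning_rate=3e-5,
                           per_device_train_batch_size=16, num_train_epochs=2),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    tokenizer=tokenizer,
)
trainer.train()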
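
The page does not say which linguistic rules were applied, so the following is a hypothetical sketch of the kind of postprocessing that can lift exact match: reranking candidate spans so the answer type implied by the question's wh-word (a year for "when", a quantity for "how many") agrees with the span's content. The heuristics, boost values, and function names are illustrative assumptions, not the project's actual rules.

# Hypothetical linguistic reranking of candidate answer spans.
import re
from typing import List, Tuple

def rerank_candidates(question: str,
                      candidates: List[Tuple[str, float]]) -> str:
    """Return the best span after heuristic score adjustments."""
    q = question.lower()
    best_span, best_score = "", float("-inf")
    for span, score in candidates:
        adjusted = score
        # "when" questions: boost spans containing a year-like token.
        if q.startswith("when") and re.search(r"\b1\d{3}\b|\b20\d{2}\b", span):
            adjusted += 0.5
        # "how many/much" questions: boost spans containing a digit.
        if q.startswith(("how many", "how much")) and re.search(r"\d", span):
            adjusted += 0.5
        # "who" questions: penalize spans that are bare numbers.
        if q.startswith("who") and re.fullmatch(r"[\d\s.,%]+", span):
            adjusted -= 0.5
        if adjusted > best_score:
            best_span, best_score = span.strip().strip('.,;:'), adjusted
    return best_span

# Example: the year-bearing span wins a "when" question despite a lower model score.
print(rerank_candidates("When was the treaty signed?",
                        [("the delegates", 0.62), ("in 1848", 0.55)]))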
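
Similarly, the augmentation bullet only names the techniques. A minimal sketch of synonym replacement (via WordNet) and random word replacement might look like this; the function names and parameters are assumptions.

# Assumed augmentation sketch; requires a one-time nltk.download("wordnet").
import random
from nltk.corpus import wordnet

def synonym_replace(text, n=2, seed=None):
    """Replace up to n words that have WordNet synonyms."""
    rng = random.Random(seed)
    words = text.split()
    candidates = [i for i, w in enumerate(words) if wordnet.synsets(w.lower())]
    rng.shuffle(candidates)
    for i in candidates[:n]:
        synonyms = {l.name().replace("_", " ")
                    for s in wordnet.synsets(words[i].lower())
                    for l in s.lemmas()} - {words[i].lower()}
        if synonyms:
            words[i] = rng.choice(sorted(synonyms))
    return " ".join(words)

def random_replace(text, vocab, n=1, seed=None):
    """Replace n randomly chosen words with random vocabulary words."""
    rng = random.Random(seed)
    words = text.split()
    for i in rng.sample(range(len(words)), min(n, len(words))):
        words[i] = rng.choice(vocab)
    return " ".join(words)

One caveat when augmenting QA data this way: replacements inside a context must not touch tokens of the gold answer span, or the character-level labels become wrong; augmenting only the questions sidesteps this.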
Type: Other
Release Date: Apr 30, 2021
Last Updated: Apr 30, 2021
