Abstract:
Verifying a claim or statement against factual evidence can be challenging, especially
when the evidence consists of multiple sentences, which makes it difficult for NLP models
to capture long-range dependencies. Most existing datasets provide claims that can be
verified by single-hop reasoning, i.e., the evidence needed to support or refute the claim
can be found in a single source. The task becomes substantially harder when evidence must
be mined from multiple sources to reach a correct verdict, and methods that succeed at
single-hop verification struggle when a claim requires multi-hop evidence to be verified.
In light of the success of prompt learning in various NLP applications, this thesis
introduces prompt learning for the multi-hop claim verification task. Through extensive
experimentation, our proposed prompt-based method, which employs manually constructed
prompts, has yielded promising results. By fine-tuning language models with prompts, we
achieve an accuracy of 83.9% along with improved cross-domain generalization. Additionally,
experiments in few-shot and zero-shot settings show that prompt-based methods outperform
traditional supervised learning techniques that rely on the fine-tuning paradigm. These
results underscore the effectiveness of prompt learning for claim verification.
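To illustrate the idea of manually constructed prompts, the following is a minimal sketch of cloze-style prompting for claim verification with a masked language model. The template wording, the verbalizer words ("true"/"false"), and the choice of bert-base-uncased are illustrative assumptions and do not reflect the exact prompts or models used in the thesis.

from transformers import pipeline

# Masked-LM pipeline; any BERT-style model with a [MASK] token works here.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

def verify(claim: str, evidence: str) -> str:
    # Manually constructed prompt: the (possibly multi-sentence, multi-hop)
    # evidence and the claim are wrapped in a template ending in a [MASK] slot.
    prompt = f"Evidence: {evidence} Claim: {claim} This claim is [MASK]."
    # Restrict predictions to the verbalizer words and keep the higher-scoring one.
    predictions = fill_mask(prompt, targets=["true", "false"])
    best = max(predictions, key=lambda p: p["score"])
    return "SUPPORTED" if best["token_str"].strip() == "true" else "REFUTED"

print(verify(
    claim="The Eiffel Tower is located in Berlin.",
    evidence="The Eiffel Tower is a wrought-iron lattice tower in Paris, France."
))

In a prompt-based fine-tuning setting, the same template would also be applied during training, so that the probabilities the model assigns to the label words at the mask position are optimized directly rather than training a separate classification head.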
Description:
Supervised by
Dr. Md. Azam Hossain,
Assistant Professor,
Department of Computer Science and Engineering (CSE),
Islamic University of Technology (IUT),
Board Bazar, Gazipur-1704, Bangladesh