Title: Analyzing Errors of Neural Models in Named Entity Recognition
Type: Poster
Authors: Solorio, Thamar; Parikh, Dwija
Dates: 2020-09-29; 2021-02-11
URI: https://hdl.handle.net/10657/7484
Language: en-US

Abstract: Despite stellar performance on many NLP tasks, the behavior of neural models such as BERT is not well understood. We analyze the behavior of such a model on the named entity recognition (NER) task and identify patterns in the errors it makes. We evaluate the generated predictions and errors to gain insight into the model's behavior. Our findings show that there are underlying patterns leading to unintended memorization. Future research is required to address these errors and refine the model.

Rights: The author of this work is the copyright owner. UH Libraries and the Texas Digital Library have their permission to store and provide access to this work. Further transmission, reproduction, or presentation of this work is prohibited except with permission of the author(s).
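The poster itself does not include code, but as a rough illustration of the kind of error analysis the abstract describes, the sketch below categorizes token-level disagreements between gold and predicted NER tags. The BIO tag scheme, the error-category names, and the example sentence are assumptions made for illustration, not details taken from the poster.

```python
from collections import Counter

def categorize_errors(gold_tags, pred_tags):
    """Count token-level error types between aligned gold and predicted BIO tags."""
    assert len(gold_tags) == len(pred_tags), "sequences must be aligned"
    errors = Counter()
    for gold, pred in zip(gold_tags, pred_tags):
        if gold == pred:
            continue
        if gold == "O":                       # model predicted an entity where there is none
            errors["spurious_entity"] += 1
        elif pred == "O":                     # model missed an entity token
            errors["missed_entity"] += 1
        elif gold.split("-")[-1] != pred.split("-")[-1]:
            errors["wrong_type"] += 1         # e.g. PER token predicted as ORG
        else:
            errors["wrong_boundary"] += 1     # correct type, wrong B-/I- prefix
    return errors

# Example: gold vs. predicted tags for a single (hypothetical) sentence.
gold = ["B-PER", "I-PER", "O", "B-ORG", "O"]
pred = ["B-PER", "O",     "O", "B-LOC", "B-MISC"]
print(categorize_errors(gold, pred))
# Counter({'missed_entity': 1, 'wrong_type': 1, 'spurious_entity': 1})
```

Aggregating such counts over a test set is one simple way to surface the recurring error patterns the abstract refers to, for example by grouping counts by entity type or by whether the entity was seen during training.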