Sequence Tagging & Named Entity Recognition
Preprocessed the training set, built an N-gram language model, extracted the transition and emission parameters of a Hidden Markov Model (HMM), and applied the Viterbi algorithm to decode the optimal tag sequence, achieving a best F1 score of 0.286.
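A minimal sketch of Viterbi decoding over HMM parameters, assuming log-space transition, emission, and initial probabilities as NumPy arrays (the array shapes and function name are illustrative, not taken from the project code):

```python
import numpy as np

def viterbi(obs, num_tags, log_trans, log_emit, log_init):
    """Find the most likely tag sequence under an HMM.

    obs       -- list of observation (word) indices
    log_trans -- log P(tag_j | tag_i), shape (T, T)
    log_emit  -- log P(word | tag),    shape (T, V)
    log_init  -- log P(tag at t=0),    shape (T,)
    """
    n = len(obs)
    dp = np.full((n, num_tags), -np.inf)        # best log score ending in tag t at step i
    back = np.zeros((n, num_tags), dtype=int)   # backpointers for path recovery

    dp[0] = log_init + log_emit[:, obs[0]]
    for i in range(1, n):
        for t in range(num_tags):
            scores = dp[i - 1] + log_trans[:, t] + log_emit[t, obs[i]]
            back[i, t] = np.argmax(scores)
            dp[i, t] = scores[back[i, t]]

    # Follow backpointers from the best final tag.
    path = [int(np.argmax(dp[-1]))]
    for i in range(n - 1, 0, -1):
        path.append(int(back[i, path[-1]]))
    return path[::-1]
```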
Applied a Maximum Entropy Markov Model (MEMM) to build a log-linear tagger with manually designed features, and used the Viterbi algorithm to decode the optimal tag sequence, achieving a best F1 score of 0.398.
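A sketch of the kind of hand-designed feature function an MEMM tagger conditions on (the specific features below are illustrative assumptions, not the feature set used in the project):

```python
def memm_features(words, i, prev_tag):
    """Features for position i, conditioned on the previous tag (MEMM-style)."""
    w = words[i]
    return {
        "word=" + w.lower(): 1.0,
        "prev_tag=" + prev_tag: 1.0,
        "prev_word=" + (words[i - 1].lower() if i > 0 else "<s>"): 1.0,
        "suffix3=" + w[-3:].lower(): 1.0,
        "is_capitalized": float(w[0].isupper()),
        "has_digit": float(any(c.isdigit() for c in w)),
    }
```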
Applied a bidirectional Long Short-Term Memory network (Bi-LSTM) with a Conditional Random Field (CRF) layer, achieving a best F1 score of 0.625.
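A minimal Bi-LSTM-CRF sketch in PyTorch, assuming the third-party `pytorch-crf` package for the CRF layer; the hyperparameters and class name are illustrative, not the project's actual configuration:

```python
import torch
import torch.nn as nn
from torchcrf import CRF  # pip install pytorch-crf (assumed dependency)

class BiLSTMCRF(nn.Module):
    """Bi-LSTM encoder producing per-token emission scores, decoded by a CRF."""

    def __init__(self, vocab_size, num_tags, embed_dim=100, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim // 2,
                            batch_first=True, bidirectional=True)
        self.emit = nn.Linear(hidden_dim, num_tags)   # per-token emission scores
        self.crf = CRF(num_tags, batch_first=True)    # learns tag-transition scores

    def forward(self, tokens, tags, mask):
        emissions = self.emit(self.lstm(self.embed(tokens))[0])
        return -self.crf(emissions, tags, mask=mask)  # negative log-likelihood loss

    def decode(self, tokens, mask):
        emissions = self.emit(self.lstm(self.embed(tokens))[0])
        return self.crf.decode(emissions, mask=mask)  # best tag sequence per sentence
```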