NER with BERT (code)

Dongbin Lee · February 12, 2021

https://medium.com/@yingbiao/ner-with-bert-in-action-936ff275bc73
This is not very different from my earlier post on sentiment analysis with Hugging Face BERT, but its step-by-step breakdown of each action is well organized, so I am recording it here for future reference.
The full code is available here.

NER

  • Named Entity Recognition

  • A common NLP task.

  • Tagging the words of a sentence with a set of predefined tags in order to extract the important information in the sentence.

  • Usually approached as multi-class classification or with a CRF (Conditional Random Field).
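For example, NER datasets like the one used below typically label tokens with a BIO scheme: "B-" begins an entity, "I-" continues it, and "O" marks tokens outside any entity. A minimal illustration (the sentence and tag names are made up, following the conventions of the Kaggle ner_dataset.csv):

```python
# A tagged sentence in BIO format: each word carries exactly one tag.
# "B-geo" begins a geographical entity, "I-geo" continues it, "O" = no entity.
sentence = ["Thousands", "marched", "through", "New", "York", "on", "Monday"]
tags     = ["O", "O", "O", "B-geo", "I-geo", "O", "B-tim"]

# Group consecutive B-/I- tags back into whole entities
entities = []
current = None
for word, tag in zip(sentence, tags):
    if tag.startswith("B-"):
        current = [tag[2:], [word]]
        entities.append(current)
    elif tag.startswith("I-") and current is not None:
        current[1].append(word)
    else:
        current = None

print([(label, " ".join(words)) for label, words in entities])
# → [('geo', 'New York'), ('tim', 'Monday')]
```

The model's job is exactly the reverse of this grouping: predict one tag per token so that the entities can be read back off.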

Action

It proceeds in the following four steps.

1. Load data

import pandas as pd

data_path = "data/" 
data_file_address = "data/ner_dataset.csv"

# Forward-fill so that every row of a sentence gets that sentence's name
df_data = pd.read_csv(data_file_address,sep=",",encoding="latin1").fillna(method='ffill')

# Have a look at the dataset
df_data.head(n=20)

# Have a look at the tag categories
df_data.Tag.unique()

# Analyse the Tag distribution
df_data.Tag.value_counts()
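The forward-fill above (`fillna(method='ffill')`, equivalent to `.ffill()`) matters because in ner_dataset.csv only the first word of each sentence carries a "Sentence #" value; the rest are blank. A toy reproduction (data made up):

```python
import pandas as pd

# Toy frame: only the first word of each sentence carries the sentence name
toy = pd.DataFrame({
    "Sentence #": ["Sentence: 1", None, None, "Sentence: 2", None],
    "Word": ["John", "lives", "here", "He", "left"],
})

# Forward-fill copies each sentence name down to the rest of its words
toy = toy.ffill()
print(toy["Sentence #"].tolist())
# → ['Sentence: 1', 'Sentence: 1', 'Sentence: 1', 'Sentence: 2', 'Sentence: 2']
```

Without this step, the `groupby("Sentence #")` below would lose every word except the first of each sentence.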

class SentenceGetter(object):
    
    def __init__(self, data):
        self.n_sent = 1
        self.data = data
        self.empty = False
        agg_func = lambda s: [(w, p, t) for w, p, t in zip(s["Word"].values.tolist(),
                                                           s["POS"].values.tolist(),
                                                           s["Tag"].values.tolist())]
        self.grouped = self.data.groupby("Sentence #").apply(agg_func)
        self.sentences = [s for s in self.grouped]
    
    def get_next(self):
        try:
            s = self.grouped["Sentence: {}".format(self.n_sent)]
            self.n_sent += 1
            return s
        except KeyError:
            return None

# Get the full document data structure
getter = SentenceGetter(df_data)

# Get sentence data
sentences = [[s[0] for s in sent] for sent in getter.sentences]
sentences[0]

# Get tag labels data
labels = [[s[2] for s in sent] for sent in getter.sentences]
print(labels[0])
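Step 2 below refers to `tokenizer`, `tokenized_texts`, `max_len`, and `tag2idx`, which this excerpt never defines. A hedged reconstruction of the missing glue (the `labels` here is a toy stand-in for the list built above; `max_len = 75` is an arbitrary choice, not taken from the source):

```python
# Toy stand-in for the `labels` list built in step 1
labels = [["B-geo", "I-geo", "O", "O"], ["O", "B-per", "O"]]

# Build the tag vocabulary: map every tag string to an integer id
tag_values = sorted({t for sent in labels for t in sent})
tag2idx = {t: i for i, t in enumerate(tag_values)}
print(tag2idx)
# → {'B-geo': 0, 'B-per': 1, 'I-geo': 2, 'O': 3}

# Fixed sequence length used for padding/truncation (an arbitrary choice here)
max_len = 75

# The token side (not shown in the excerpt) would come from the WordPiece
# tokenizer of the `transformers` package, along these lines:
#   from transformers import BertTokenizer
#   tokenizer = BertTokenizer.from_pretrained('bert-base-cased', do_lower_case=False)
#   tokenized_texts = [tokenizer.tokenize(" ".join(s)) for s in sentences]
```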

2. Set data into training embeddings

Token embedding

from keras.preprocessing.sequence import pad_sequences

# Convert text tokens into ids, then pad/truncate to max_len
input_ids = pad_sequences([tokenizer.convert_tokens_to_ids(txt) for txt in tokenized_texts],
                          maxlen=max_len, dtype="long", truncating="post", padding="post")
print(input_ids[0])

Attention mask


# Real tokens get mask 1; pad tokens (placeholders for empty positions) get mask 0
attention_masks = [[int(i>0) for i in ii] for ii in input_ids]
attention_masks[0]
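The label sequences need the same padding so that each token id lines up with a tag id. A pure-Python sketch of the idea (the article does this with `pad_sequences`; the toy `tag2idx`, the small `max_len`, and the choice of "O" as the padding tag are assumptions here):

```python
# Pad/truncate each tag sequence to max_len, mirroring what pad_sequences
# does for the token ids ("post" truncation and padding)
tag2idx = {"B-geo": 0, "I-geo": 1, "O": 2}   # toy tag vocabulary
max_len = 6
labels = [["B-geo", "I-geo", "O", "O"]]

pad_id = tag2idx["O"]  # assumption: padded positions get the "O" tag id
tag_ids = [
    ([tag2idx[t] for t in sent] + [pad_id] * max_len)[:max_len]
    for sent in labels
]
print(tag_ids[0])
# → [0, 1, 2, 2, 2, 2]
```

Because the attention mask is 0 on those padded positions, their labels never contribute to the loss anyway.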

3. Train model

# It's highly recommended to download the BERT pretrained model first, then save it to a local directory
# Use the cased version for better performance
model_file_address = 'data/bert-base-cased'

# Will load config and weight with from_pretrained()
model = BertForTokenClassification.from_pretrained(model_file_address, num_labels=len(tag2idx))
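The excerpt stops after loading the model, so here is a minimal sketch of the fine-tuning loop it implies (PyTorch with AdamW; the tiny randomly initialised config, the toy batch, the learning rate, and the epoch count are all assumptions so the sketch runs without a download — the article uses `from_pretrained()` weights and real batches instead):

```python
import torch
from torch.optim import AdamW
from torch.utils.data import DataLoader, TensorDataset
from transformers import BertConfig, BertForTokenClassification

# Tiny randomly initialised BERT so the sketch runs without a model download
config = BertConfig(vocab_size=100, hidden_size=32, num_hidden_layers=1,
                    num_attention_heads=2, intermediate_size=64, num_labels=3)
model = BertForTokenClassification(config)

# Toy batch standing in for the padded arrays built in step 2
input_ids = torch.randint(1, 100, (4, 8))
attention_mask = torch.ones_like(input_ids)
label_ids = torch.randint(0, 3, (4, 8))
loader = DataLoader(TensorDataset(input_ids, attention_mask, label_ids), batch_size=2)

optimizer = AdamW(model.parameters(), lr=3e-5)
model.train()
for epoch in range(1):                      # real runs use more epochs
    for ids, mask, labels in loader:
        optimizer.zero_grad()
        # BertForTokenClassification computes the token-level loss itself
        loss = model(ids, attention_mask=mask, labels=labels).loss
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
        optimizer.step()
```

Gradient clipping at 1.0 is the usual BERT fine-tuning convention; everything else follows the standard `transformers` token-classification recipe.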

4. Evaluate model performance
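For token-level evaluation, padded positions must be excluded via the attention mask. A pure-Python sketch of masked accuracy (the predictions, labels, and masks here are made up; the linked article also reports entity-level F1):

```python
# Token-level accuracy that ignores padded positions (mask == 0).
# pred_ids / label_ids / masks stand in for real model outputs.
pred_ids  = [[0, 1, 2, 0, 2, 2]]
label_ids = [[0, 1, 2, 2, 2, 2]]
masks     = [[1, 1, 1, 1, 0, 0]]   # only the first four tokens are real

correct = total = 0
for preds, golds, mask in zip(pred_ids, label_ids, masks):
    for p, g, m in zip(preds, golds, mask):
        if m:                       # skip pad positions
            correct += (p == g)
            total += 1

print(correct / total)
# → 0.75
```

Note that plain token accuracy flatters NER models, since "O" dominates the tag distribution; entity-level precision/recall/F1 (e.g. with the seqeval package) is the more informative metric.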
