Adding special cases to a Hugging Face tokenizer
With the code below, [C1] is kept intact as a single token instead of being split apart when you tokenize.
# Register [C1]-[C4] as additional special tokens so the tokenizer never splits them.
special_tokens_dict = {'additional_special_tokens': ['[C1]', '[C2]', '[C3]', '[C4]']}
num_added_toks = tokenizer.add_special_tokens(special_tokens_dict)
# Grow the model's embedding matrix so the newly added token ids have vectors.
model.resize_token_embeddings(len(tokenizer))
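For reference, here is a minimal self-contained sketch of the same steps. The `bert-base-uncased` checkpoint and the `AutoTokenizer`/`AutoModel` classes are illustrative assumptions; the post does not name a specific model, and any Hugging Face tokenizer/model pair follows the same pattern.

```python
# Minimal sketch, assuming a BERT checkpoint (not specified in the post).
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

# Register the new special tokens and resize the embedding matrix to match.
special_tokens_dict = {"additional_special_tokens": ["[C1]", "[C2]", "[C3]", "[C4]"]}
num_added_toks = tokenizer.add_special_tokens(special_tokens_dict)
model.resize_token_embeddings(len(tokenizer))

# [C1] survives as one token instead of being broken into word pieces.
print(tokenizer.tokenize("[C1] what time is it?"))
# expected: ['[C1]', 'what', 'time', 'is', 'it', '?']
```

Resizing the embeddings is the step that is easy to forget: without it, the new token ids added by `add_special_tokens` point past the end of the model's embedding table.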