
AI Paper Research

A survey and summary of AI papers


Natural Language Processing — 2019

3 papers

JMLR 2020 · 18,000+ citations

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer


Colin Raffel, Noam Shazeer, Adam Roberts et al. (2019)

NeurIPS 2019 · 8,000+ citations

XLNet: Generalized Autoregressive Pretraining for Language Understanding


Zhilin Yang, Zihang Dai, Yiming Yang et al. (2019)

arXiv · 18,000+ citations

RoBERTa: A Robustly Optimized BERT Pretraining Approach


Yinhan Liu, Myle Ott, Naman Goyal et al. (2019)
