
AI Paper Research

Surveying and organizing AI papers


AI Safety & Alignment — 2014

1 paper

ICLR 2015 · 15,000+ citations

Explaining and Harnessing Adversarial Examples


Ian J. Goodfellow, Jonathon Shlens, Christian Szegedy (2014)
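The core method of this paper is the fast gradient sign method (FGSM): perturb an input in the direction of the sign of the loss gradient, x_adv = x + ε · sign(∇ₓ J(θ, x, y)). A minimal sketch of that update, using a toy logistic-regression model with a hand-derived input gradient (the weights, inputs, and ε below are illustrative values, not from the paper):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """FGSM attack on a logistic-regression model.

    Returns x + eps * sign(dJ/dx), where J is the cross-entropy loss.
    For p = sigmoid(w.x + b), the input gradient is (p - y) * w.
    """
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y) * w              # analytic gradient of the loss w.r.t. x
    return x + eps * np.sign(grad_x)

# Hypothetical toy setup: a clean input with true label 1.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 1.0])
y = 1.0

x_adv = fgsm(x, y, w, b, eps=0.5)
# The perturbation moves each coordinate by eps against the correct class,
# so the model's confidence in the true label drops.
```

Because sign() makes every coordinate move by exactly ε, the perturbation has a small max-norm yet, as the paper argues, its effect grows with the input dimension for roughly linear models.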

← All AI Safety & Alignment papers