# Paper

275 posts

[Paper Review] CoVisPose: Co-Visibility Pose Transformer for Wide-Baseline Relative Pose Estimation in 360° Indoor Panoramas

A study on estimating the relative pose between two panorama images given as input. Feature-based methods struggle on images with many featureless regions or repeated similar structures; to compensate, denser RGB or RGB-D

January 17, 2023 · 0 comments

[Paper Review] Learning Superpoint Graph Cut for 3D Instance Segmentation

The task is instance segmentation on a 3D point cloud input; existing detection-based and clustering-based methods perform poorly on complex geometric structures. In this paper, su

January 16, 2023 · 0 comments

[Medical AI] SRGAN (in progress)

Link to the paper. Abstract: despite the accuracy and speed of CNN-based super-resolution, texture detail is lost; addressed by minimizing MSE to optimize the super-resolution method. High peak signal-to-noise ratio (PSNR: super re...

January 15, 2023 · 0 comments

[Paper] Neural Machine Translation by Jointly Learning to Align and Translate

Reference sources: [Paper Review] Neural Machine Translation by Jointly Learning to Align and Translate; 15. Attention Mechanism; [NLP | Paper Review] NEURAL MACH

January 6, 2023 · 0 comments

GAP: Differentially private Graph Neural Networks with Aggregation Perturbation

Differentially private GNN based on Aggregation Perturbation (GAP),

January 5, 2023 · 0 comments

Luna: Linear Unified Nested Attention (Summary Review)

Luna: Linear Unified Nested Attention, NeurIPS 2021

January 2, 2023 · 0 comments

Embarrassingly Shallow Autoencoders for Sparse Data (EASE): Paper Notes

Key features of EASE introduced in the paper: no hidden layer; built as an autoencoder; derives a closed-form solution of the convex training objective; the diagonal of the weight matrix is zero. Input: items the user has interacted with; output(

December 27, 2022 · 0 comments

[Language Paper Review] Robust Speech Recognition via Large-Scale Weak Supervision

[Language Paper Review] Robust Speech Recognition via Large-Scale Weak Supervision (a.k.a. Whisper)

December 22, 2022 · 0 comments

[Language Paper Review] Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

[Language Paper Review] Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer (a.k.a. T5)

December 22, 2022 · 0 comments

[Language Paper Review] BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension

[Language Paper Review] BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension (a.k.a. BART)

December 22, 2022 · 0 comments

[Language Paper Review] RoBERTa: A Robustly Optimized BERT Pretraining Approach

[Language Paper Review] RoBERTa: A Robustly Optimized BERT Pretraining Approach (a.k.a. RoBERTa)

December 22, 2022 · 0 comments

[Language Paper Review] XLNet: Generalized Autoregressive Pretraining for Language Understanding

[Language Paper Review] XLNet: Generalized Autoregressive Pretraining for Language Understanding (a.k.a. XLNet)

December 22, 2022 · 0 comments

[Language Paper Review] MASS: Masked Sequence to Sequence Pre-training for Language Generation

[Language Paper Review] MASS: Masked Sequence to Sequence Pre-training for Language Generation (a.k.a. MASS)

December 22, 2022 · 0 comments

[Language Paper Review] Multi-Task Deep Neural Networks for Natural Language Understanding

[Language Paper Review] Multi-Task Deep Neural Networks for Natural Language Understanding (a.k.a. MT-DNN)

December 22, 2022 · 0 comments

[Language Paper Review] Language Models are Few-Shot Learners

[Language Paper Review] Language Models are Few-Shot Learners (a.k.a. GPT-3)

December 22, 2022 · 0 comments

[Language Paper Review] Language Models are Unsupervised Multitask Learners

[Language Paper Review] Language Models are Unsupervised Multitask Learners (a.k.a. GPT-2)

December 22, 2022 · 0 comments

[Language Paper Review] BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

[Language Paper Review] BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (a.k.a. BERT)

December 22, 2022 · 0 comments

[Language Paper Review] Improving Language Understanding by Generative Pre-Training

[Language Paper Review] Improving Language Understanding by Generative Pre-Training (a.k.a. GPT-1)

December 22, 2022 · 0 comments

[Language Paper Review] Attention Is All You Need

[Language Paper Review] Attention Is All You Need (a.k.a. Transformer)

December 22, 2022 · 0 comments

[Language Paper Review] Effective Approaches to Attention-based Neural Machine Translation

[Language Paper Review] Effective Approaches to Attention-based Neural Machine Translation (a.k.a. Luong Attention)

December 22, 2022 · 0 comments