List: Paper (50)
My Vision, Computer Vision

BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models
"The cost of vision-and-language pre-training has become increasingly prohibitive due to end-to-end training of large-scale models. This paper proposes BLIP-2, a generic and efficient pre-training strategy that bootstraps vision-language pre-training from o.." (arxiv.org)
Problem: Most state-of-the-art V..
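As a rough illustration of the bootstrapping idea (not the paper's actual code), below is a minimal PyTorch sketch of a small trainable bridge with learnable query tokens sitting between a frozen image encoder and a frozen LLM; the module names, dimensions, and number of queries are illustrative assumptions.

```python
# Conceptual sketch of the BLIP-2 idea: only a small bridge module is trained,
# while the image encoder and the LLM stay frozen. Shapes/names are assumptions.
import torch
import torch.nn as nn

class FrozenBridge(nn.Module):
    def __init__(self, vision_dim=1024, llm_dim=4096, num_queries=32):
        super().__init__()
        # Learnable query tokens that attend to frozen image features
        self.queries = nn.Parameter(torch.randn(num_queries, vision_dim))
        self.cross_attn = nn.MultiheadAttention(vision_dim, num_heads=8, batch_first=True)
        # Projects the queried features into the (frozen) LLM input space
        self.to_llm = nn.Linear(vision_dim, llm_dim)

    def forward(self, frozen_image_feats):  # (B, N_patches, vision_dim), no grad through the encoder
        q = self.queries.unsqueeze(0).expand(frozen_image_feats.size(0), -1, -1)
        out, _ = self.cross_attn(q, frozen_image_feats, frozen_image_feats)
        return self.to_llm(out)              # (B, num_queries, llm_dim) soft prompts for the LLM
```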

CLIPPO: Image-and-Language Understanding from Pixels Only
"Multimodal models are becoming increasingly effective, in part due to unified components, such as the Transformer architecture. However, multimodal models still often consist of many task- and modality-specific pieces and training procedures. For example,.." (arxiv.org)
Problem: Most multimodal models use different components for each modality, and applying them to another task requires additional training procedures tailored to it..

CLIP-Adapter: Better Vision-Language Models with Feature Adapters
"Large-scale contrastive vision-language pre-training has shown significant progress in visual representation learning. Unlike traditional visual systems trained by a fixed set of discrete labels, a new paradigm was introduced in \cite{radford2021learning}.." (arxiv.org)
Problem (issue raised): CLIP performs zero-shot transfer by inserting class names into a prompt template, but this..
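For context on the zero-shot transfer mentioned above, here is a minimal sketch of CLIP zero-shot classification with class names inserted into a prompt template, using the openai/CLIP package (https://github.com/openai/CLIP); the label set, template, and image path are illustrative assumptions.

```python
# Minimal sketch of CLIP zero-shot classification via a prompt template.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

class_names = ["cat", "dog", "car"]                    # hypothetical label set
prompts = [f"a photo of a {c}" for c in class_names]   # class name inserted into a template

image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)
text = clip.tokenize(prompts).to(device)

with torch.no_grad():
    image_feat = model.encode_image(image)
    text_feat = model.encode_text(text)
    # Cosine similarity between the image and each class prompt
    image_feat = image_feat / image_feat.norm(dim=-1, keepdim=True)
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_feat @ text_feat.T).softmax(dim=-1)

print({c: float(p) for c, p in zip(class_names, probs[0])})
```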

DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning
"We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1. DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrates remarkable reasonin.." (arxiv.org)
Abstract: In this paper, models named DeepSeek-R1-Zero and DeepSeek-R1..

BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
"Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has bee.." (arxiv.org)
Abstract: Vision-Language Pre-t..

SLIP: Self-supervision meets Language-Image Pre-training
"Recent work has shown that self-supervised pre-training leads to improvements over supervised learning on challenging visual recognition tasks. CLIP, an exciting new approach to learning with language supervision, demonstrates promising performance on a wi.." (arxiv.org)
Abstract: In this work, self-supervised learning ... visual representation lea..
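As a conceptual sketch (not the official SLIP implementation) of combining language supervision with self-supervision, the snippet below adds a SimCLR-style InfoNCE term over two augmented views to a CLIP-style image-text contrastive loss; the function names and loss weighting are assumptions.

```python
# Rough sketch: CLIP-style image-text contrastive loss plus a SimCLR-style
# self-supervised loss over two augmented views of the same image batch.
import torch
import torch.nn.functional as F

def info_nce(a, b, temperature=0.1):
    """Symmetric InfoNCE between two batches of embeddings (matched by index)."""
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature
    targets = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

def slip_style_loss(img_feat, txt_feat, view1_feat, view2_feat, ssl_weight=1.0):
    clip_loss = info_nce(img_feat, txt_feat)        # language supervision (image-text pairs)
    simclr_loss = info_nce(view1_feat, view2_feat)  # self-supervision (two augmented views)
    return clip_loss + ssl_weight * simclr_loss
```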