List: MoVE-KD: Knowledge Distillation for VLMs with Mixture of Visual Encoders paper review (1)
My Vision, Computer Vision

MoVE-KD: Knowledge Distillation for VLMs with Mixture of Visual Encoders

"Visual encoders are fundamental components in vision-language models (VLMs), each showcasing unique strengths derived from various pre-trained visual foundation models. To leverage the various capabilities of these encoders, recent studies incorporate mult..." (arxiv.org)

Author: Cao, Jiajun, et al.
Journal: Arxiv
Keyword: Knowledg..
Paper
2025. 3. 31. 14:35
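
The preview cuts off mid-abstract, but the setup it gestures at is distilling several pre-trained visual encoders (teachers) into a single encoder (student). Below is a minimal sketch of that general multi-teacher feature-distillation idea, not the paper's implementation: the class name `MultiTeacherDistiller`, the feature dimensions, the per-teacher linear projections, and the plain MSE loss are all hypothetical choices for illustration.

```python
# A minimal sketch of multi-teacher feature distillation; NOT the MoVE-KD
# implementation. All names and dimensions here are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTeacherDistiller(nn.Module):
    """Distill features from several frozen visual encoders (teachers)
    into a single student encoder via per-teacher projection heads."""

    def __init__(self, student_dim=768, teacher_dims=(1024, 1152)):
        super().__init__()
        # One linear head per teacher maps the student feature into that
        # teacher's feature space so the two can be compared directly.
        self.projs = nn.ModuleList(nn.Linear(student_dim, d) for d in teacher_dims)

    def forward(self, student_feat, teacher_feats):
        # student_feat: (B, N, student_dim); teacher_feats: list of (B, N, d_i).
        # Assumes all encoders emit the same token count N; a real system
        # would have to reconcile mismatched token grids first.
        loss = student_feat.new_zeros(())
        for proj, t in zip(self.projs, teacher_feats):
            loss = loss + F.mse_loss(proj(student_feat), t.detach())
        return loss / len(self.projs)

# Tiny usage example with random tensors standing in for encoder outputs.
distiller = MultiTeacherDistiller()
s = torch.randn(2, 196, 768)                                  # student tokens
ts = [torch.randn(2, 196, 1024), torch.randn(2, 196, 1152)]   # teacher tokens
print(distiller(s, ts))                                       # scalar loss
```

The uniform average over teachers is the simplest possible merging rule; the paper's "mixture of visual encoders" presumably weights and combines teacher knowledge more carefully, as the full review discusses.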