List: clip adapter (2)
My Vision, Computer Vision

Learning to Prompt for Vision-Language Models
"Large pre-trained vision-language models like CLIP have shown great potential in learning representations that are transferable across a wide range of downstream tasks. Different from the traditional representation learning that is based mostly on discreti…" (arxiv.org)
Published: September 2, 2021 / Journal-Venue: Springer, 2022
Problem: Existing VLMs such as CLIP rely on prompt engineering for zero-shot t…
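The preview above notes that CLIP-style VLMs depend on hand-engineered prompts; the paper's idea (CoOp) is to replace the hand-written context words with learnable context vectors prepended to the class-name embedding. A minimal NumPy sketch of how such a prompt is assembled, where the context length, embedding dimension, and function name are illustrative assumptions rather than the paper's actual code:

```python
import numpy as np

def build_coop_prompt(ctx_vectors, class_embedding):
    """Concatenate M learnable context vectors with one class-name
    embedding: [V_1][V_2]...[V_M][CLASS], in the spirit of CoOp's
    unified (class-shared) context."""
    return np.concatenate([ctx_vectors, class_embedding[None, :]], axis=0)

rng = np.random.default_rng(0)
M, d = 16, 512                       # context length, embedding dim (illustrative)
ctx = rng.normal(size=(M, d))        # learnable context, shared across classes
cls = rng.normal(size=(d,))          # frozen word embedding of a class name

prompt = build_coop_prompt(ctx, cls)
print(prompt.shape)                  # (17, 512): M context tokens + 1 class token
```

During training, only `ctx` would receive gradients while the CLIP encoders stay frozen, which is what makes this cheaper than prompt engineering by hand.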

CLIP-Adapter: Better Vision-Language Models with Feature Adapters
"Large-scale contrastive vision-language pre-training has shown significant progress in visual representation learning. Unlike traditional visual systems trained by a fixed set of discrete labels, a new paradigm was introduced in \cite{radford2021learning}…" (arxiv.org)
Problem: CLIP performs zero-shot transfer by inserting class names into a prompt template, but this…
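The truncated preview points at CLIP-Adapter's alternative to prompt tuning: a small bottleneck MLP refines the frozen CLIP feature and is blended back with the original feature through a residual ratio. A hedged sketch of that residual adapter, where the dimensions, ratio, and initialization are assumptions for illustration:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def clip_adapter(feat, W_down, W_up, alpha=0.2):
    """Residual feature adapter in the spirit of CLIP-Adapter:
    a bottleneck MLP applied to the frozen CLIP feature, mixed
    with the original feature by a residual ratio alpha."""
    adapted = relu(feat @ W_down) @ W_up       # d -> d/4 -> d bottleneck
    return alpha * adapted + (1.0 - alpha) * feat

rng = np.random.default_rng(0)
d = 512                                        # CLIP feature dim (illustrative)
feat = rng.normal(size=(d,))
W_down = rng.normal(size=(d, d // 4)) * 0.02   # down-projection weights
W_up = rng.normal(size=(d // 4, d)) * 0.02     # up-projection weights

out = clip_adapter(feat, W_down, W_up)
print(out.shape)                               # (512,)
```

With `alpha=0` the adapter is bypassed entirely and the frozen CLIP feature passes through unchanged, which is why the residual ratio gives a smooth knob between zero-shot behavior and the adapted features.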