Listing: lora dora (1)
My Vision, Computer Vision

DoRA: Weight-Decomposed Low-Rank Adaptation
Among the widely used parameter-efficient fine-tuning (PEFT) methods, LoRA and its variants have gained considerable popularity because they avoid additional inference costs. However, there still often exists an accuracy gap between these methods and full… (arxiv.org)
Author: Liu, Shih-Yang, et al.
Journal: ICML 2024
Keyword: DoRA
Published Date: February 2024
Paper
2025. 3. 18. 16:46
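
The excerpt above only hints at what DoRA actually changes relative to LoRA. As a rough illustration (not code from the linked post or the authors' implementation), the weight-decomposed update described in the paper can be sketched as a PyTorch module; the class and attribute names (`DoRALinear`, `lora_A`, `lora_B`, `magnitude`) are assumptions made for this sketch.

```python
# Minimal sketch of a DoRA-style linear layer (illustrative only).
# The adapted weight is decomposed into a learnable per-column magnitude
# and a direction given by the LoRA-updated weight:
#   W' = m * (W0 + B A) / ||W0 + B A||_column
import torch
import torch.nn as nn
import torch.nn.functional as F

class DoRALinear(nn.Module):
    def __init__(self, in_features: int, out_features: int, rank: int = 8):
        super().__init__()
        # Frozen pretrained weight W0 (stands in for a loaded checkpoint here)
        self.weight = nn.Parameter(
            torch.empty(out_features, in_features), requires_grad=False
        )
        nn.init.kaiming_uniform_(self.weight)
        # Trainable low-rank update B A (the LoRA part); B starts at zero,
        # so training begins from the pretrained behavior
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))
        # Trainable magnitude, initialized to the column norms of W0
        self.magnitude = nn.Parameter(
            self.weight.norm(p=2, dim=0, keepdim=True).clone()
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Direction: LoRA-updated weight, normalized column-wise
        adapted = self.weight + self.lora_B @ self.lora_A
        direction = adapted / adapted.norm(p=2, dim=0, keepdim=True)
        # Rescale each column by the learned magnitude
        return F.linear(x, self.magnitude * direction)
```

As with LoRA, the decomposed update can be folded back into a single weight matrix once training is done, which is the "no additional inference cost" property the excerpt refers to.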