List: Paper (50)
My Vision, Computer Vision

DoRA: Weight-Decomposed Low-Rank Adaptation (arxiv.org)
"Among the widely used parameter-efficient fine-tuning (PEFT) methods, LoRA and its variants have gained considerable popularity because of avoiding additional inference costs. However, there still often exists an accuracy gap between these methods and full.."
Author: Liu, Shih-Yang, et al. | Journal: ICML 2024 | Keyword: DoRA | Published Date: February 2024..
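For readers skimming the preview, here is a minimal PyTorch sketch of the idea named in the title: the frozen pretrained weight is decomposed into a magnitude vector and a direction, and the low-rank update is applied only to the directional part. The class name, rank `r`, and `alpha` are illustrative assumptions, not code from the paper.

```python
import torch
import torch.nn as nn

class DoRALinear(nn.Module):
    """Minimal sketch of weight-decomposed low-rank adaptation (DoRA)."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        # Frozen pretrained weight W0, shape (out_features, in_features).
        self.register_buffer("weight", base.weight.detach().clone())
        self.register_buffer("bias", None if base.bias is None else base.bias.detach().clone())
        self.scaling = alpha / r
        out_f, in_f = self.weight.shape
        # Trainable low-rank factors for the directional update.
        self.lora_A = nn.Parameter(torch.randn(r, in_f) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_f, r))
        # Trainable magnitude, initialized from the column norms of W0.
        self.m = nn.Parameter(self.weight.norm(p=2, dim=0, keepdim=True))

    def forward(self, x):
        w = self.weight + self.scaling * (self.lora_B @ self.lora_A)
        # Normalize each column to get the direction, then rescale by m.
        w = self.m * (w / w.norm(p=2, dim=0, keepdim=True))
        return nn.functional.linear(x, w, self.bias)
```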

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift (arxiv.org)
"Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful param.."
Author: Ioffe, Sergey, and Christian Sz..
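The snippet above describes internal covariate shift; the transform itself is a per-feature normalization with mini-batch statistics followed by a learnable scale and shift. A minimal training-mode sketch (running statistics omitted; in practice one would use `torch.nn.BatchNorm1d`/`BatchNorm2d`):

```python
import torch

def batch_norm_train(x, gamma, beta, eps=1e-5):
    """Normalize each feature of x (shape (N, C)) with the mini-batch
    mean/variance, then apply the learnable scale gamma and shift beta."""
    mean = x.mean(dim=0)
    var = x.var(dim=0, unbiased=False)
    x_hat = (x - mean) / torch.sqrt(var + eps)
    return gamma * x_hat + beta

x = torch.randn(32, 64)
gamma, beta = torch.ones(64), torch.zeros(64)
y = batch_norm_train(x, gamma, beta)   # zero-mean, unit-variance per feature
```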

LoRA: Low-Rank Adaptation of Large Language Models (arxiv.org)
"An important paradigm of natural language processing consists of large-scale pre-training on general domain data and adaptation to particular tasks or domains. As we pre-train larger models, full fine-tuning, which retrains all model parameters, becomes le.."
Journal: ICLR 2022 | Published Date: June 17, 2021 | Keyword: LLM, RANK
Abstract: The size of models..
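As a companion to the DoRA sketch above, here is a minimal sketch of plain LoRA: the pretrained linear layer stays frozen and only the low-rank factors A and B are trained, so no extra inference cost remains once the update is merged. Class name and hyperparameters are again illustrative.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal sketch of LoRA: y = W0 x + (alpha/r) * B A x, with W0 frozen."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)          # freeze the pretrained layer
        self.scaling = alpha / r
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        # B starts at zero so the adapted layer matches W0 at initialization.
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))

    def forward(self, x):
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)
```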

Data-Efficient Multimodal Fusion on a Single GPU (arxiv.org)
"The goal of multimodal alignment is to learn a single latent space that is shared between multimodal inputs. The most powerful models in this space have been trained using massive datasets of paired inputs and large-scale computational resources, making th.."
Journal: CVPR 2024 | Published Date: December 15, 2023 | Keyword: Single GPU, Vision Language ..
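The preview only states the goal of multimodal alignment, so the sketch below shows the general recipe it builds on: small trainable adapters on top of frozen unimodal encoders, aligned with a symmetric contrastive loss. The adapter architecture, names, and sizes here are assumptions for illustration, not the paper's actual method or its augmentation scheme.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionAdapter(nn.Module):
    """Illustrative MLP that maps frozen unimodal features into a shared
    latent space (dimensions are assumed, not taken from the paper)."""
    def __init__(self, in_dim, out_dim=512, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, out_dim))

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

def contrastive_alignment_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired (image, text) embeddings."""
    logits = img_emb @ txt_emb.T / temperature
    targets = torch.arange(img_emb.size(0), device=img_emb.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.T, targets))
```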

TinyLLaVA: A Framework of Small-scale Large Multimodal Models (arxiv.org)
"We present the TinyLLaVA framework that provides a unified perspective in designing and analyzing the small-scale Large Multimodal Models (LMMs). We empirically study the effects of different vision encoders, connection modules, language models, training da.."
Journal: Arxiv | Published Date: February 22, 2024
This paper introduces the TinyLLaVA framework. In addition, ..
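LLaVA-style small multimodal models pair a vision encoder with a compact language model through a connection module. The sketch below shows a typical MLP connector that projects vision-encoder patch features into the language model's embedding space; the module name and dimensions are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class VisionLanguageConnector(nn.Module):
    """Illustrative two-layer MLP connector between a frozen vision encoder
    and a small language model (dimensions assumed)."""
    def __init__(self, vision_dim=768, lm_dim=2048):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(vision_dim, lm_dim), nn.GELU(),
                                  nn.Linear(lm_dim, lm_dim))

    def forward(self, patch_features):      # (B, num_patches, vision_dim)
        return self.proj(patch_features)    # (B, num_patches, lm_dim)

# The projected visual tokens would then be concatenated with the text token
# embeddings before being fed to the small language model.
```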

SPICE: Semantic Propositional Image Caption Evaluation (arxiv.org)
"There is considerable interest in the task of automatically generating image captions. However, evaluation is challenging. Existing automatic evaluation metrics are primarily sensitive to n-gram overlap, which is neither necessary nor sufficient for the ta.."
Journal: ECCV 2016 | Published Date: September 16, 2016 | Keyword: Evaluation Metric, SP..
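SPICE scores a caption by comparing semantic proposition tuples (objects, attributes, relations) extracted from scene graphs rather than n-grams. The sketch below computes an F1 over such tuples with hand-written inputs, leaving out the scene-graph parsing step that the real metric performs.

```python
def spice_style_f1(candidate_tuples, reference_tuples):
    """Illustrative F1 over semantic proposition tuples; the real SPICE metric
    builds these tuples from parsed scene graphs, which is omitted here."""
    cand, ref = set(candidate_tuples), set(reference_tuples)
    matched = len(cand & ref)
    if not cand or not ref or matched == 0:
        return 0.0
    precision = matched / len(cand)
    recall = matched / len(ref)
    return 2 * precision * recall / (precision + recall)

# Example with hand-written tuples:
cand = {("girl",), ("girl", "young"), ("girl", "ride", "horse")}
ref  = {("girl",), ("horse",), ("girl", "ride", "horse")}
print(spice_style_f1(cand, ref))   # 0.666...
```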