Posts from 2025/05/16 (1)
My Vision, Computer Vision

ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
"Increasing model size when pretraining natural language representations often results in improved performance on downstream tasks. However, at some point further model increases become harder due to GPU/TPU memory limitations and longer training times. To..." (arxiv.org)
Author : Lan, Zhenzhong, et al.
Journal : ICLR 2020
Keyword ..
Paper
2025. 5. 16. 10:34