[python] Uppercase to lowercase, lowercase to uppercase

In Python, the handy swapcase() function converts uppercase letters to lowercase and lowercase letters to uppercase. But if you weren't using Python, what algorithm would you use?

```python
s = input()
for ch in s:
    if ch.isupper():   # uppercase -> lowercase
        print(ch.lower(), end="")
    else:              # lowercase (or non-letter) -> uppercase
        print(ch.upper(), end="")
```

2024. 11. 8.

[Paper Review] (VPT) Visual Prompt Tuning

Visual Prompt Tuning - ECCV 2022
https://arxiv.org/abs/2203.12119

Visual Prompt Tuning
The current modus operandi in adapting pre-trained models involves updating all the backbone parameters, i.e., full fine-tuning. This paper introduces Visual Prompt Tuning (VPT) as an efficient and effective alternative to full fine-tuning for large-scale Tr..

1. Prompt Tuning
Prompting - The promising approach..

2024. 7. 21.

[Paper Review] (SHIP) Improving Zero-Shot Generalization for CLIP with Synthesized Prompts

Improving Zero-Shot Generalization for CLIP with Synthesized Prompts - ICCV 2023
This time I review a paper that adopts the approach of Co-CoOp.
https://arxiv.org/abs/2307.07397

Improving Zero-Shot Generalization for CLIP with Synthesized Prompts
With the growing interest in pretrained vision-language models like CLIP, recent research has focused on adapting these models to downstream tasks. Despite achieving promising ..

2024. 7. 21.

[Paper Review] (Co-CoOp) Conditional Prompt Learning for Vision-Language Models

Conditional Prompt Learning for Vision-Language Models - CVPR 2022
https://arxiv.org/abs/2203.05557v2

Conditional Prompt Learning for Vision-Language Models
With the rise of powerful pre-trained vision-language models like CLIP, it becomes essential to investigate ways to adapt these models to downstream datasets. A recently proposed method named Context Optimization (CoOp) introduces the concept..

2024. 7. 20.
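The manual case-swap loop in the [python] post above can be cross-checked against Python's built-in str.swapcase(). A minimal sketch, building the result as a string instead of printing character by character (the helper name manual_swapcase is illustrative, not from the post):

```python
def manual_swapcase(s: str) -> str:
    # Flip the case of each character by hand, mirroring str.swapcase():
    # uppercase -> lowercase, everything else -> uppercase
    out = []
    for ch in s:
        out.append(ch.lower() if ch.isupper() else ch.upper())
    return "".join(out)

print(manual_swapcase("Hello World"))                            # hELLO wORLD
print(manual_swapcase("Hello World") == "Hello World".swapcase())  # True
```

Joining a list at the end avoids repeated print calls and makes the function easy to test against the built-in.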
[Paper Review] (CoOp) Learning to Prompt for Vision-Language Models

Learning to Prompt for Vision-Language Models - IJCV 2022
https://arxiv.org/abs/2109.01134

Learning to Prompt for Vision-Language Models
Large pre-trained vision-language models like CLIP have shown great potential in learning representations that are transferable across a wide range of downstream tasks. Different from the traditional representation learning that is based mostly on discreti..

2024. 7. 20.

[Cloud Computing] 16. Cloud Storage

Cloud Storage
Purpose of storage: storing is short-term; archiving is long-term.
Local SSD: no redundancy.
A region contains multiple zones; multi-regional storage distributes data across multiple regions.

The storage classes differ in how data is kept:
STANDARD: frequently used data; storage cost is high, but I/O (retrieval) cost is low.
NEARLINE: accessed about once a month (long-tail media, e.g. old movies that are rarely watched).
COLDLINE: accessed roughly once a quarter or less (backups, periodic inspections, audits).
ARCHIVE: data retained long-term under legal or regulatory obligations; storage cost is lowest, but retrieval (download) I/O cost is high.

Multi-Regional Storage: e.g. OTT services. Regional Storage: access within a single region. Nearli..

2024. 7. 17.
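The Cloud Storage notes above describe classes that trade storage cost against retrieval cost based on access frequency. A minimal sketch of that decision rule; the function name and exact thresholds are illustrative assumptions based on the notes, not the Google Cloud Storage API:

```python
def pick_storage_class(accesses_per_year: float) -> str:
    """Suggest a storage class from expected access frequency.

    Thresholds are illustrative, following the notes:
    STANDARD for frequent access, NEARLINE for ~monthly,
    COLDLINE for ~quarterly, ARCHIVE for long-term retention.
    """
    if accesses_per_year >= 52:   # weekly or more often
        return "STANDARD"
    if accesses_per_year >= 12:   # about once a month
        return "NEARLINE"
    if accesses_per_year >= 4:    # about once a quarter
        return "COLDLINE"
    return "ARCHIVE"              # rarely, if ever, retrieved

print(pick_storage_class(365))  # STANDARD
print(pick_storage_class(12))   # NEARLINE
print(pick_storage_class(1))    # ARCHIVE
```

The ordering of the checks mirrors the cost trade-off: the less often data is read, the cheaper the storage but the more expensive each retrieval.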