
Summary("Lookahead loss"): During encoder training for models like IP-Adapter, a single-pass generator such as a consistency model (LCM) is used to produce images from the noisy latents, which are then compared with the reference image to compute an additional loss (the LCM-lookahead loss). However, the LCM model can end up minimizing this loss regardless of the input (z_rt below), which could break the alignment b..
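A minimal sketch of the lookahead idea described above. All names here (`encoder`, `lcm_one_step`, `lcm_lookahead_loss`) are hypothetical placeholders, not the paper's actual API; the point is only the shape of the computation: one-step generation from a noisy latent, then a reconstruction loss against the reference image.

```python
import numpy as np

def lcm_lookahead_loss(z_t, ref_image, encoder, lcm_one_step):
    """Sketch of an LCM-lookahead loss (hypothetical names, not the paper's code).

    z_t          : noisy latent at some timestep
    ref_image    : the personalization reference image
    encoder      : maps the reference image to a conditioning signal
    lcm_one_step : single-pass (consistency-model-style) generator
    """
    cond = encoder(ref_image)           # condition extracted from the reference
    x_pred = lcm_one_step(z_t, cond)    # single forward pass, no iterative sampling
    # Simple L2 between the one-step prediction and the reference.
    # In practice gradients into the generator would need care (e.g. stop-grad),
    # otherwise the generator can minimize this loss while ignoring z_t,
    # which is exactly the degenerate solution mentioned above.
    return float(np.mean((x_pred - ref_image) ** 2))
```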

Link, personalized text-to-image generation by Meta AI. Insights: They create an "architecture tailored for personalized image generation", which amounts to an improved IP-Adapter for "any-personality" generation. Therefore, the model does not need to be trained for each new subject, unlike LoRA or DreamBooth. Meanwhile, other "any-personality" generation models can come with strong over-fitting behav..

NeurIPS 2023, arXiv Link. What they did: They propose "Battle of the Backbones" (BoB), benchmarking a diverse suite of (vision) pre-trained models, including classic ImageNet-trained CNNs and vision-language models (e.g., CLIP). The target tasks are OOD generalization, classification, object detection, and image retrieval. Key observations: across more than 1,500 training runs, they found that CNNs pretrained..

ACL 2023 (though the preprint appeared on arXiv in 2022). Super-simple background: RAG (Retrieval-Augmented Generation) is commonly used to mitigate the hallucination of LLMs. To find the proper documents (here called target documents) for given queries, a "contriever" is used. The contriever can be a text-encoder model such as T5 or BERT. The target document can be searched with the feature encoded by the contriever ..
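The retrieval step described above can be sketched as nearest-neighbor search over encoder embeddings. This is a generic dense-retrieval sketch, not the paper's implementation; the query and document vectors stand in for contriever outputs, and cosine similarity is one common scoring choice.

```python
import numpy as np

def retrieve_top_k(query_vec, doc_vecs, k=1):
    """Return indices of the k documents most similar to the query.

    query_vec : (d,) embedding of the query from the contriever-style encoder
    doc_vecs  : (n, d) pre-computed embeddings of the candidate documents
    """
    # Normalize so the dot product becomes cosine similarity.
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q
    # Highest-scoring documents first.
    return np.argsort(-scores)[:k]
```

In practice the document embeddings are indexed ahead of time (e.g. with an approximate nearest-neighbor library) so retrieval stays fast at scale.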

Here I introduce the generalization error bound for the domain generalization problem, in which the test domain (or style, sometimes) differs from the training domain. Preliminaries. Notations: $X \in \mathcal{X} \subset \mathbb{R}^d$, $Y \in \mathcal{Y} \subset \mathbb{R}$: common input and target spaces; $P^i_{XY}$: data distribution of the $i$'th domain; $S^i \sim P^i_{XY}$: samples from the $i$'th domain; $\epsilo..