Contrastive Language-Image Pre-training
| CLIP | |
|---|---|
| Developer(s) | OpenAI |
| Initial release | January 5, 2021 |
| Repository | https://github.com/OpenAI/CLIP |
| Written in | Python |
| License | MIT License |
| Website | openai.com/research/clip |
Contrastive Language-Image Pre-training (CLIP) is a technique for jointly training a pair of neural network models, one for image understanding and one for text understanding, using a contrastive objective: matching image-text pairs are pulled together in a shared embedding space while non-matching pairs are pushed apart. This method has enabled broad applications across multiple domains, including cross-modal retrieval, text-to-image generation, and aesthetic ranking.
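The contrastive objective can be sketched as a symmetric cross-entropy over pairwise cosine similarities between image and text embeddings. The following is a minimal NumPy sketch under stated assumptions, not OpenAI's implementation: the function name and the temperature value are illustrative, and real CLIP learns the temperature and computes embeddings with trained encoders.

```python
import numpy as np

def clip_contrastive_loss(image_embs, text_embs, temperature=0.07):
    # Hypothetical sketch of a CLIP-style symmetric contrastive loss.
    # image_embs, text_embs: (n, d) arrays where row i of each is a matching pair.
    # L2-normalize so dot products become cosine similarities.
    image_embs = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    text_embs = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    # Pairwise similarity logits, scaled by a temperature.
    logits = image_embs @ text_embs.T / temperature
    n = logits.shape[0]

    def cross_entropy(l):
        # Row-wise softmax cross-entropy where the diagonal (i, i) is correct.
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(n), np.arange(n)].mean()

    # Average the image-to-text and text-to-image directions.
    return (cross_entropy(logits) + cross_entropy(logits.T)) / 2

rng = np.random.default_rng(0)
img = rng.normal(size=(8, 16))
txt = rng.normal(size=(8, 16))
loss_random = clip_contrastive_loss(img, txt)
# Perfectly aligned pairs (text embedding equals image embedding) score lower.
loss_aligned = clip_contrastive_loss(img, img)
```

Minimizing this loss drives each image embedding toward its paired caption embedding and away from the other captions in the batch, which is what makes zero-shot classification by comparing an image against a set of candidate text prompts possible.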