Contrastive Language-Image Pre-training

CLIP
Developer(s): OpenAI
Initial release: January 5, 2021
Repository: https://github.com/OpenAI/CLIP
Written in: Python
License: MIT License
Website: openai.com/research/clip

Contrastive Language-Image Pre-training (CLIP) is a technique for training a pair of neural network models, one for image understanding and one for text understanding, with a contrastive objective: matching image-caption pairs are mapped to nearby points in a shared embedding space, while mismatched pairs are pushed apart. This method has enabled broad applications across multiple domains, including cross-modal retrieval, text-to-image generation, and aesthetic ranking.
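The contrastive objective described above can be sketched as a symmetric cross-entropy loss over the pairwise similarities of a batch of image and text embeddings. The following is a minimal NumPy sketch, not OpenAI's implementation; the function name, the default temperature value, and the use of NumPy rather than a deep-learning framework are all illustrative assumptions.

```python
import numpy as np

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric contrastive loss over a batch of paired embeddings.

    image_emb, text_emb: arrays of shape (batch, dim), where row i of
    each array comes from the same image-caption pair. Illustrative
    sketch only; the temperature value here is an assumption.
    """
    # L2-normalize so the dot product is cosine similarity.
    image_emb = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    text_emb = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)

    # Pairwise similarity logits; matching pairs lie on the diagonal.
    logits = image_emb @ text_emb.T / temperature
    n = logits.shape[0]
    labels = np.arange(n)

    def cross_entropy(l):
        # Numerically stable log-softmax over each row.
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # Average the image-to-text and text-to-image directions.
    return (cross_entropy(logits) + cross_entropy(logits.T)) / 2
```

Minimizing this loss pulls each image embedding toward its own caption's embedding and away from the other captions in the batch, which is what lets the trained pair of encoders be used for retrieval and zero-shot classification.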