vLLM is currently one of the fastest inference engines for large language models (LLMs). It supports a wide range of model architectures and quantization methods.
vLLM also supports vision-language models (VLMs) with multimodal inputs containing both images and text prompts. For instance, vLLM can now serve models like Phi-3.5 Vision and Pixtral, which excel at tasks such as image captioning, optical character recognition (OCR), and visual question answering (VQA).
In this article, I'll show you how to use VLMs with vLLM, focusing on the key parameters that influence memory consumption. We will see why VLMs consume far more memory than standard LLMs. We'll use Phi-3.5 Vision and Pixtral as case studies for a multimodal application that processes prompts containing text and images.
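To give a concrete idea of what this looks like before we dig into those parameters, here is a minimal sketch of offline multimodal inference with vLLM's `LLM` API, using Phi-3.5 Vision. The image path and generation settings are placeholders, and the exact arguments and prompt template may vary with your vLLM version, so treat it as an illustration rather than the notebook's code:

```python
from vllm import LLM, SamplingParams
from PIL import Image

# Load Phi-3.5 Vision. max_model_len and gpu_memory_utilization are examples of
# the memory-related parameters discussed in this article; values are placeholders.
llm = LLM(
    model="microsoft/Phi-3.5-vision-instruct",
    trust_remote_code=True,
    max_model_len=4096,
    gpu_memory_utilization=0.9,
    limit_mm_per_prompt={"image": 1},  # allow at most one image per prompt
)

# "example.jpg" is a placeholder for any local image file.
image = Image.open("example.jpg")

# Prompt following Phi-3.5 Vision's chat format, with an image placeholder token.
prompt = "<|user|>\n<|image_1|>\nDescribe this image.<|end|>\n<|assistant|>\n"

outputs = llm.generate(
    {"prompt": prompt, "multi_modal_data": {"image": image}},
    SamplingParams(temperature=0.0, max_tokens=128),
)
print(outputs[0].outputs[0].text)
```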
The code for running Phi-3.5 Vision and Pixtral with vLLM is provided in this notebook:
In transformer models, generating text token by token is slow because each prediction depends on all previous tokens…
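To make that cost concrete, here is a minimal, illustrative sketch of naive autoregressive decoding, assuming a Hugging Face-style model interface (this is not vLLM's implementation): every step re-runs the model over all tokens generated so far, so the per-token cost grows with the sequence length.

```python
import torch

def generate_naive(model, input_ids: torch.Tensor, max_new_tokens: int) -> torch.Tensor:
    """Greedy decoding without a KV cache: each step reprocesses the whole sequence."""
    for _ in range(max_new_tokens):
        # Full forward pass over ALL tokens seen so far.
        logits = model(input_ids).logits            # (batch, seq_len, vocab_size)
        # Pick the most likely next token from the last position.
        next_token = logits[:, -1, :].argmax(dim=-1, keepdim=True)
        input_ids = torch.cat([input_ids, next_token], dim=-1)
    return input_ids
```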