Model is not running inference on multiple images; is this the right template?

#4
by ltbd78 - opened
```python
conversation = [
    {
        "role": "User",
        "content": "Compare and contrast <image_placeholder> and <image_placeholder>.",
        "images": ["./data/1.png", "./data/2.jpg"]
    },
    {
        "role": "Assistant",
        "content": ""
    }
]
```

UPDATE: it worked for a different set of images with the prompt "Describe <image_placeholder>. Then describe <image_placeholder>."
Still, I would like to clarify: how is the order determined? Is it sequential?

DeepSeek org
edited Mar 18

Yes, your prompt is correct for multi-image inputs, and the images are matched sequentially: the first <image_placeholder> corresponds to "./data/1.png", and the second <image_placeholder> corresponds to "./data/2.jpg".
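
For reference, here is a minimal end-to-end sketch of this two-image flow, following the usage pattern from the DeepSeek-VL repository README. The model path `deepseek-ai/deepseek-vl-7b-chat` is an assumption (any DeepSeek-VL chat checkpoint should behave the same), and the `deepseek_vl` package must be installed from that repository:

```python
import torch
from transformers import AutoModelForCausalLM

from deepseek_vl.models import VLChatProcessor
from deepseek_vl.utils.io import load_pil_images

# Assumed checkpoint; swap in whichever DeepSeek-VL chat model you are using.
model_path = "deepseek-ai/deepseek-vl-7b-chat"
vl_chat_processor = VLChatProcessor.from_pretrained(model_path)
tokenizer = vl_chat_processor.tokenizer

vl_gpt = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True)
vl_gpt = vl_gpt.to(torch.bfloat16).cuda().eval()

# Placeholders are matched to the "images" list in order:
# the 1st <image_placeholder> -> ./data/1.png, the 2nd -> ./data/2.jpg.
conversation = [
    {
        "role": "User",
        "content": "Compare and contrast <image_placeholder> and <image_placeholder>.",
        "images": ["./data/1.png", "./data/2.jpg"]
    },
    {
        "role": "Assistant",
        "content": ""
    }
]

# Load the PIL images referenced by the conversation and batch the inputs.
pil_images = load_pil_images(conversation)
prepare_inputs = vl_chat_processor(
    conversations=conversation,
    images=pil_images,
    force_batchify=True
).to(vl_gpt.device)

# Encode the images and splice their embeddings into the token sequence.
inputs_embeds = vl_gpt.prepare_inputs_embeds(**prepare_inputs)

# Generate the Assistant turn.
outputs = vl_gpt.language_model.generate(
    inputs_embeds=inputs_embeds,
    attention_mask=prepare_inputs.attention_mask,
    pad_token_id=tokenizer.eos_token_id,
    bos_token_id=tokenizer.bos_token_id,
    eos_token_id=tokenizer.eos_token_id,
    max_new_tokens=512,
    do_sample=False,
    use_cache=True,
)

answer = tokenizer.decode(outputs[0].cpu().tolist(), skip_special_tokens=True)
print(answer)
```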

@doubility123 For me, multi-image input does not work at all. Was DeepSeek-VL trained on multi-image-text pairs?

For our model, we need to train on multi-image-text pairs; does the architecture support that?

Thanks
