Questions regarding the LLM search interface project

Anamay Narkar anamay.narkar.102 at gmail.com
Thu Mar 19 12:29:47 GMT 2026


Hello,
I'm interested in the "Interface the database search engine to an AI
based LLM" project mentioned in the ideas list for GSoC 2026.

A few questions before I draft the proposal:
- For running the model locally, I was planning to use llama.cpp. Is
that acceptable, or do you prefer integrating with the existing OpenCV
system that digiKam already uses for facial recognition and grouping?
- Is the main goal to map natural language phrases to params for the
existing search API, or is it to embed the photos for semantic
similarity search, or both?
- For vector storage, if semantic search is in scope: would you prefer
extending the existing SQLite database with an extension such as
sqlite-vec, or a separate store such as Qdrant?
- For machines that cannot run a local model because of hardware
constraints, should the feature be unavailable, or strictly opt-in
regardless of the hardware?
- digiKam already prompts users to download models for face
recognition at startup. I'm assuming the LLM download should follow
the same pattern?
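To make the second question more concrete, here is a rough sketch of
what I mean by mapping a natural-language phrase to search parameters.
Everything here is my own assumption, not digiKam's actual search API:
the prompt format, the parameter names, and the run_llm stub (which in
the real feature would be a llama.cpp completion call) are all
placeholders.

```python
import json

def run_llm(prompt: str) -> str:
    # Stub standing in for a llama.cpp completion call; returns a
    # canned response so the sketch runs standalone.
    return ('{"tags": ["beach"], "date_from": "2024-06-01", '
            '"date_to": "2024-08-31", "rating_min": 0}')

# Hypothetical prompt asking the model for structured JSON output.
PROMPT_TEMPLATE = (
    'Convert the user\'s photo query into JSON with keys '
    '"tags", "date_from", "date_to", "rating_min". Query: {query}'
)

def query_to_search_params(query: str) -> dict:
    """Map a natural-language phrase to structured search parameters."""
    raw = run_llm(PROMPT_TEMPLATE.format(query=query))
    params = json.loads(raw)
    # Validate the expected keys so malformed LLM output fails loudly
    # instead of silently producing a broken search.
    expected = {"tags", "date_from", "date_to", "rating_min"}
    if set(params) != expected:
        raise ValueError(f"unexpected keys: {set(params)}")
    return params

params = query_to_search_params("beach photos from last summer")
print(params["tags"])  # stubbed output: ['beach']
```

The idea would be to translate the parsed JSON into whatever the
existing search backend already accepts, so the LLM never touches the
database directly.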
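And for the vector-storage question, this toy sketch is what I have in
mind by "semantic similarity search": rank photos by cosine similarity
between a query embedding and stored photo embeddings. The 3-dim
vectors and file names are made up for illustration; a real system
would store model-generated embeddings in sqlite-vec or an external
store, rather than scanning a Python dict.

```python
import math

# Toy 3-dim "embeddings"; real ones would come from a vision model.
PHOTO_EMBEDDINGS = {
    "beach.jpg":  [0.9, 0.1, 0.0],
    "forest.jpg": [0.1, 0.9, 0.1],
    "city.jpg":   [0.0, 0.2, 0.9],
}

def cosine(a, b):
    # Cosine similarity: dot product over the product of magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def nearest(query_vec, k=2):
    # Brute-force ranking; a vector store would replace this scan
    # with an index lookup.
    ranked = sorted(PHOTO_EMBEDDINGS.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]

print(nearest([1.0, 0.0, 0.0]))  # ['beach.jpg', 'forest.jpg']
```

My question above is essentially whether this ranking should live
inside the existing SQLite database or in a separate service.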

Thanks,
Anamay Narkar
anamay.narkar.102 at gmail.com
Gitlab: https://invent.kde.org/anamaynarkar


More information about the Digikam-devel mailing list