Why do people host LLMs at home, when training your own model on data scraped from the internet will never be even remotely as efficient as sending a paid prompt to some high-quality official model?

inb4 privacy concerns or a proof of concept

those are out of the discussion, I want someone to prove their LLM can be as insightful and accurate as a paid one. I don’t care about anything other than the quality of the generated answers

  • borokov@lemmy.world · 12 hours ago

    Poor internet connection or no internet at all, network latency too high for their needs, or a specific fine-tuned LLM?

    Of course, the main reason is privacy. My company hosts its own GPT-4 chatbot and forbids us from using public ones. But I suppose there are other legit use cases for hosting your own LLM.