
Hi Alejandro, aren't chatbots usually built on existing LLMs (or NLP platforms like Dialogflow), which are provided by big companies? In such cases, I don't think we (the chatbot builders) have access to the base model in the first place to fine-tune it, right?

For example, I have assumed that GPT-4 can only be "fine-tuned" by OpenAI themselves.

So, when you suggest "training your own chatbot" do you mean chatbots that use our own models, which most companies don't have?

Or do you mean that this applies to smaller open-source models that we can host ourselves and fine-tune?


Good question. The answer is both. I only mentioned it briefly in the article, but while you can fine-tune a small open-source model on local infrastructure, OpenAI, Mistral, Fireworks, MonsterAPI, and many other LLM providers will also fine-tune their models for you in the cloud if you give them a custom dataset. They charge for the fine-tuning process and then host the fine-tuned model for you at a premium price, so it's more expensive than using a base model through an API, but still cheaper than doing it all on your own infrastructure.
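To make the "give them a custom dataset" part concrete, here's a minimal sketch of how such a dataset is typically prepared. It assumes the chat-style JSONL format that OpenAI's fine-tuning endpoint expects (one JSON object per line, each with a `messages` list); the example questions, answers, and filename are made up for illustration.

```python
import json

def build_example(user_msg: str, assistant_msg: str,
                  system_msg: str = "You are a helpful support bot.") -> dict:
    # One training example in the chat format: a JSON object with a
    # "messages" list of role/content pairs (system, user, assistant).
    return {
        "messages": [
            {"role": "system", "content": system_msg},
            {"role": "user", "content": user_msg},
            {"role": "assistant", "content": assistant_msg},
        ]
    }

def write_jsonl(examples: list[dict], path: str) -> None:
    # Providers typically ingest one JSON object per line (JSONL).
    with open(path, "w", encoding="utf-8") as f:
        for ex in examples:
            f.write(json.dumps(ex) + "\n")

# Hypothetical training pairs for a customer-support chatbot.
examples = [
    build_example("How do I reset my password?",
                  "Go to Settings > Security and click 'Reset password'."),
    build_example("What are your support hours?",
                  "We're available 9am-5pm ET, Monday through Friday."),
]
write_jsonl(examples, "finetune_data.jsonl")
```

Once a file like this exists, the provider-specific step is just uploading it and starting a fine-tuning job (with OpenAI's Python client, roughly `client.files.create(..., purpose="fine-tune")` followed by `client.fine_tuning.jobs.create(...)`); the resulting model is then served to you behind their API like any other model.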
