What do you think of use cases where an LLM has to speak a domain-specific language with high accuracy, e.g. telecom or medical science? Do you think RAG-based generalist LLMs could solve those use cases? Wouldn't an SLM with domain knowledge (a fine-tuned LLM) be better?
Is this view (that fine-tuning is a waste of time) generic? Or are there certain use cases where fine-tuning has major advantages?
I don't think it is officially documented, but I always assumed that the fine-tuning APIs offered by OpenAI and Google were using LoRA behind the scenes anyway. Fine-tuning of all parameters would be horribly expensive.
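The cost gap is easy to see with a back-of-envelope count. A minimal sketch, assuming a hypothetical hidden size of 4096 and a LoRA rank of 8 (both are illustrative choices, not any specific model's real dimensions):

```python
def full_ft_params(d_in: int, d_out: int) -> int:
    # Full fine-tuning updates every entry of the weight matrix W.
    return d_in * d_out

def lora_params(d_in: int, d_out: int, rank: int) -> int:
    # LoRA freezes W and trains two low-rank factors A (d_in x r)
    # and B (r x d_out); the effective weight becomes W + A @ B.
    return d_in * rank + rank * d_out

d = 4096  # hidden size (assumption)
r = 8     # LoRA rank (assumption)

full = full_ft_params(d, d)   # 16,777,216 trainable params
lora = lora_params(d, d, r)   # 65,536 trainable params
print(f"LoRA trains {100 * lora / full:.2f}% of the parameters")
# -> LoRA trains 0.39% of the parameters
```

For a single square layer this is a ~256x reduction in trainable parameters, which is why an API provider serving many fine-tunes would plausibly prefer adapters over full-parameter updates.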
Interesting, thanks for sharing.
Of course. Make sure you share it with the fine-tuning crowd.
Good read. Thanks for sharing views.
I disagree with fine-tuning LLMs being a waste of time. It is a good use of time if it is done for the appropriate reasons.