5 Comments
Carlos Guadián

Interesting, thanks for sharing.

Devansh

Of course. Make sure you share it with the fine tuning tards

Atul Deshpande

Good read, thanks for sharing your views.

What do you think of use cases where an LLM should speak domain-specific languages with high accuracy, e.g. telecom or medical science? Do you think RAG-based generalist LLMs could solve those use cases? Wouldn't an SLM with domain knowledge (a fine-tuned LLM) be better off?

Is this view (that fine-tuning is a waste of time) generic? Or are there certain use cases where fine-tuning has major advantages?

EarlyGray

I don't think it is officially documented, but I always assumed that the fine-tuning APIs offered by OpenAI and Google were using LoRA behind the scenes anyway. Fine-tuning all parameters would be horribly expensive.
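To put rough numbers on why full-parameter fine-tuning is so much costlier than LoRA, here is a back-of-the-envelope sketch. The model dimensions (a 7B-parameter, LLaMA-style configuration) and the choice of adapting only the query/value projections at rank 8 are illustrative assumptions, not details of any provider's actual fine-tuning API:

```python
# Rough trainable-parameter comparison: full fine-tuning vs. LoRA.
# All dimensions below are assumed for illustration (a ~7B-parameter,
# LLaMA-like model), not figures from OpenAI's or Google's services.

d_model = 4096                      # hidden size
n_layers = 32                       # transformer layers
total_params = 7_000_000_000        # params updated by full fine-tuning

# LoRA freezes the base weights and learns two low-rank factors per
# adapted d x d matrix: A (d x r) and B (r x d), i.e. 2 * d * r new
# trainable parameters each. Adapting the query and value projections
# in every layer (a common choice) at rank r = 8:
rank = 8
adapted_matrices_per_layer = 2      # W_q and W_v
lora_params = n_layers * adapted_matrices_per_layer * 2 * d_model * rank

print(f"full fine-tuning: {total_params:,} trainable params")
print(f"LoRA (r={rank}):       {lora_params:,} trainable params")
print(f"ratio:            {lora_params / total_params:.4%}")
```

Under these assumptions LoRA trains roughly 4 million parameters instead of 7 billion, about 0.06% of the model, which is why serving per-customer adapters is so much cheaper than storing a full copy of the weights per customer.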

Paul Lewallen

I disagree that fine-tuning LLMs is a waste of time. It is a good use of time when it is done for the appropriate reasons.
