Hi Devansh,
It's really cool to see you interleave concrete examples to back some of the points you highlight (e.g. iterate on the data, not the model).
First, you're exactly right to think like an org and focus on the team. Star students are celebrated in school, but you wouldn't want to be a hero in MLOps. Productive collaboration practices often determine whether small startups can scale up and continue their growth. Big orgs like FAANG have figured this out and have a lot of institutional knowledge built into their processes, toolchains, and practices.
Second, how we close the loop, integrating feedback into future rounds of training and validation, is the crux of MLOps. I think once you can streamline the feedback loop, you have a functioning operation.
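To make that concrete, here's a minimal sketch of such a loop in Python. Every name in it (collect_feedback, train, validate) is a toy stand-in I'm inventing for illustration, not anything from the paper; the point is just the shape of the loop: serve, collect feedback, retrain, validate, and promote only on improvement.

```python
import random

def train(data):
    # Toy "model": predict the majority label seen so far.
    positives = sum(label for _, label in data)
    return positives >= len(data) / 2

def validate(model, holdout):
    # Accuracy of the (constant) prediction on a fixed holdout set.
    return sum(model == label for _, label in holdout) / len(holdout)

def collect_feedback(n=100):
    # Stand-in for production feedback: logged inputs plus true outcomes.
    return [(random.random(), random.random() < 0.7) for _ in range(n)]

data = collect_feedback()
holdout = collect_feedback()
model = train(data)

for round_id in range(5):
    data.extend(collect_feedback())        # close the loop: fold new feedback in
    candidate = train(data)                # next round of training
    if validate(candidate, holdout) >= validate(model, holdout):
        model = candidate                  # promote only if it doesn't regress
    print(f"round {round_id}: holdout accuracy {validate(model, holdout):.2f}")
```

In a real system each of those steps is a pipeline rather than a function, but the validation gate before promotion is the part worth keeping.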
Finally, when this paper was written, it was on the cusp of the ChatGPT revolution, and before the mass layoffs. Most things stay the same, but there are a couple important differences worth thinking about.
For so much of my professional life, I've been taught to assume that people's salaries are much more expensive than hardware, and to spend more on compute if it makes programmers more productive. With the massive energy and water needs of generative AI, and the scarcity of NVIDIA's most advanced GPUs, that trend is starting to shift: hardware is becoming the more expensive line item. That could disrupt how we think about programmers and developer productivity. We could be returning to a time like the 1950s, when massive IBM mainframes justified hiring large teams of operators.
Great work!
It means a lot coming from one of the authors. That's an interesting point you mentioned. Would be interesting to see how trends shift.
PS: if you have some time, I'd love to get your opinion on part 2 (which is also built off your paper).
https://artificialintelligencemadesimple.substack.com/p/what-are-the-biggest-challenges-in
Excellent as always, Devansh. Really enjoyed the executive highlights and the highlighting on the graphic showing a section of the paper. I'll add to the experimentation bullet: it's important that experimentation is done thoughtfully, because ML doesn't just require a ton of experimentation; that experimentation can also be very expensive.
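To illustrate the "thoughtfully" part: one cheap discipline is to rank candidate experiments by expected gain per unit of compute before spending the budget. This is a hypothetical sketch with made-up numbers, not a method from the article or the paper.

```python
# Hypothetical: rank candidate experiments by expected gain per GPU-hour,
# then spend a fixed budget greedily. All names and numbers are invented.

budget_gpu_hours = 100.0

# (name, estimated GPU-hours, expected metric gain) -- rough guesses you
# would refine from pilot runs.
candidates = [
    ("lr sweep on small model",    4.0, 0.010),
    ("bigger backbone",           60.0, 0.030),
    ("data cleaning pass",         8.0, 0.025),
    ("longer training schedule",  40.0, 0.015),
]

# Highest expected gain per GPU-hour first.
candidates.sort(key=lambda c: c[2] / c[1], reverse=True)

spent = 0.0
for name, cost, gain in candidates:
    if spent + cost > budget_gpu_hours:
        print(f"skip: {name} ({cost:.0f} GPU-h, over budget)")
        continue
    spent += cost
    print(f"run:  {name} ({cost:.0f} GPU-h, expected +{gain:.3f})")
print(f"total spent: {spent:.0f} / {budget_gpu_hours:.0f} GPU-h")
```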
Great add. You should do an in-depth look into how to run good experiments in MLE.
That's a good idea. I'd have to dig more into that before I did, though.