Session: No GPUs? No Problem! Training ML Models Without Expensive Hardware
In this session, I will explore how to train effective machine learning models without relying on expensive, rapidly evolving GPU hardware. Instead of upgrading local machines every time a new GPU is released, which quickly becomes both unreasonably expensive and time-consuming to set up, cloud platforms (e.g. GCP Vertex AI, AWS SageMaker, Bedrock, etc.) offer scalable, managed environments where training can be offloaded reliably and cost-effectively.
I will walk through how hyperparameter tuning, experiment tracking, and model deployment can be streamlined on these platforms, and how standardized environments keep results consistent across runs, a consistency that is often difficult to achieve when training on a mix of local setups or ad-hoc servers. The session will also cover practical steps for optimizing code, containerizing workflows, and orchestrating training through command-line tools or automated build triggers.
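As a preview of the containerization step, a training workflow can be packaged into an image as small as this; the base image, file names, and entrypoint below are placeholders for illustration, not a prescribed setup:

```dockerfile
# Minimal sketch of a training container; adapt base image and
# dependencies to your framework (names here are assumptions).
FROM python:3.11-slim

WORKDIR /app

# Install pinned dependencies first so this layer is cached
# across code-only changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the training script and make it the container's entrypoint.
COPY train.py .
ENTRYPOINT ["python", "train.py"]
```

Once built and pushed to a registry, an image like this can be handed to a managed training service, for example with `gcloud ai custom-jobs create` on Vertex AI, passing the image URI in a `--worker-pool-spec`; the equivalent on other platforms differs in syntax but follows the same pattern.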
Whether you’re working with classical models or training large generative systems, this talk will show practical, platform-agnostic methods for making machine learning workflows more scalable, reproducible, and budget-friendly, without needing your own GPU farm.