Machine Learning Everywhere feat. Pete Warden | Stanford MLSys Seminar Episode 31

Episode 31 of the Stanford MLSys Seminar Series!

Machine Learning Everywhere
Speaker: Pete Warden
Link to slides:

Abstract: When I first joined Google in 2014, I was amazed to discover they were using 13-kilobyte neural network models to recognize “OK Google” on tiny embedded chips in Android phones. This felt like deep magic, and it made me wonder how many other problems these kinds of minuscule ML models could solve. Over the past few years I’ve been helping Google ship products using this approach with TensorFlow Lite Micro, and helping external developers create new applications. While it’s still early days for “TinyML”, we’re already seeing interesting impacts on how engineers compose systems, including software-defined sensors, cascades of ML models, air-gapped ambient computing, and ubiquitous on-device voice interfaces. In this talk I’ll cover the past, present, and future of embedded ML systems.

--

The Stanford MLSys Seminar is hosted by Dan Fu, Karan Goel, Fiodar Kazhamiaka,
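As a rough illustration of the scale involved (not from the talk itself), a back-of-envelope sketch of how many weights a 13 KB model budget allows under a few common quantization levels; the quantization choices here are assumptions for illustration:

```python
# Back-of-envelope: how many weights fit in a 13 KB model budget?
# The quantization levels below are illustrative assumptions,
# not details from the talk.
BUDGET_BYTES = 13 * 1024  # 13 KB, as in the "OK Google" model

for bits, name in [(32, "float32"), (16, "float16"), (8, "int8")]:
    weights = BUDGET_BYTES * 8 // bits  # total bits / bits per weight
    print(f"{name:8s}: ~{weights:,} weights")
```

Even at full float32 precision, a few thousand weights fit in such a budget, which is why keyword-spotting models this small are feasible on embedded chips.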