DSC Webinar Series: State of the Art Deep Learning on Apache Spark™

Big data and AI are joined at the hip: the best AI applications require massive amounts of constantly updated training data to build state-of-the-art models. Increasingly, Spark users want to integrate Spark with distributed machine learning frameworks built for state-of-the-art training. Here’s the problem: big data frameworks like Spark and distributed deep learning frameworks don’t play well together because their execution models differ fundamentally. Spark jobs break into independent tasks that can be scheduled and retried individually, whereas distributed deep learning jobs need all of their tasks running concurrently and communicating with one another throughout training.