Machine learning is now widely used in many applications. In some cases, it is sufficient to generate batch results from machine learning models in an offline manner. In other cases, models must be deployed online in a production environment, so that end users or other system components can benefit from the models' real-time outputs. Serving machine learning models is largely an engineering challenge: designing the interface, reducing prediction latency, and managing the computing resources required to run the models. In this talk, I will discuss different ways of serving machine learning models in Python, and introduce several useful Python packages that make deploying machine learning models much easier. I will also share practical experience from deploying different kinds of machine learning models.
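As a taste of the topic, the sketch below contrasts offline batch scoring with concurrent online scoring using only the standard library. The `predict` function is a hypothetical stand-in for a real model's prediction method (e.g. a scikit-learn estimator); the thread-pool approach is one simple option for overlapping I/O-bound prediction calls, not the only way to serve a model.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for a trained model's predict method;
# a real deployment would wrap e.g. a scikit-learn estimator.
def predict(features):
    return sum(features)  # dummy "score"

# Offline (batch) scoring: process all inputs in one pass.
batch = [[1, 2], [3, 4], [5, 6]]
batch_results = [predict(x) for x in batch]

# Online serving often has to handle concurrent requests; a thread
# pool lets I/O-bound prediction calls overlap. map preserves order.
with ThreadPoolExecutor(max_workers=4) as pool:
    online_results = list(pool.map(predict, batch))

print(batch_results)   # [3, 7, 11]
print(online_results)  # [3, 7, 11]
```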
Buzzwords: machine learning, deployment, multithreading, multiprocessing, web applications, network programming
Level: Intermediate: targeted at audiences with intermediate experience in Python programming
Requirements for Audience: Basic understanding of machine learning
Speaker: Albert Au Yeung (Hong Kong)
Speaker Bio: Albert is currently a machine learning lead engineer at Zwoop, an e-commerce startup in Hong Kong. He was involved in various data mining and machine learning projects while working as a researcher at Huawei's Noah's Ark Lab, ASTRI, and NTT Communication Science Laboratories in Japan. He has a PhD in Computer Science from the University of Southampton. Albert has been programming in Python since 2004. He is also a part-time lecturer at the Chinese University of Hong Kong, teaching courses related to Python, machine learning, and network programming.