Reminder: IEEE SSCS / CS event: Machine Learning - Analysis of Total Variation

Transforming data into insight is crucial today, as every activity, from private business to the public sector, from hospitals to mega-marts, benefits from it. However, the explosive volume of data makes manual analysis nearly impossible: we create roughly 2.5 quintillion bytes per day (as of 2022), and nearly 90% of all data has been created in the last couple of years. Data mining, performed with machine learning tools, helps us analyze and understand this data. Data mining and machine learning both depend heavily on statistical tools and techniques; hence the term "statistical learning" is sometimes used for these methods, understood as a continuous process, since no model is perfect.

Machine learning (ML), the driving force behind most AI systems, trains models by minimizing prediction errors on a given dataset. Federated learning (FL) extends this paradigm to networks of distributed ML tasks, where each task involves a separate model and dataset. Just as empirical risk minimization is a foundational principle in ML, total variation (TV) minimization can provide a unifying framework for FL. This talk explores the mathematical structure of TV minimization and its role in designing trustworthy AI. We demonstrate how carefully chosen components of TV minimization lead to AI services that are robust, privacy-preserving, and explainable.
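As a rough illustration of the idea behind the talk, here is a minimal sketch of TV minimization for networked federated learning: each node trains a local linear model on its own data, while a total variation penalty couples the models of connected nodes. The network, toy data, penalty strength, and step size are all illustrative assumptions, not the speaker's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

n_nodes, dim = 4, 3
edges = [(0, 1), (1, 2), (2, 3)]      # assumed chain network

# Toy local datasets: all nodes observe a similar underlying model.
w_true = rng.normal(size=dim)
X = [rng.normal(size=(20, dim)) for _ in range(n_nodes)]
y = [X[i] @ w_true + 0.1 * rng.normal(size=20) for i in range(n_nodes)]

W = np.zeros((n_nodes, dim))          # one model per node
lam, lr = 1.0, 0.01                   # TV strength, step size (assumed)

# Gradient descent on: sum_i local MSE(w_i) + lam * sum_(i,j) ||w_i - w_j||^2
for _ in range(500):
    grad = np.zeros_like(W)
    for i in range(n_nodes):          # local empirical-risk gradients
        grad[i] = 2 * X[i].T @ (X[i] @ W[i] - y[i]) / len(y[i])
    for i, j in edges:                # TV penalty pulls neighbors together
        diff = W[i] - W[j]
        grad[i] += 2 * lam * diff
        grad[j] -= 2 * lam * diff
    W -= lr * grad

# The TV penalty makes neighboring models agree closely.
print(np.max(np.abs(W[0] - W[1])))
```

Shrinking `lam` toward zero recovers independent per-node training, while a large `lam` forces a single shared model, which is the sense in which TV minimization interpolates between purely local and fully centralized learning.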

On 19 February @ 9AM, Alexander Jung, Associate Professor at Aalto University in Finland, will discuss total variation minimization.

(see attached flyer)
