In conjunction with the 26th ACM/IFIP International Middleware Conference
15th – 19th December 2025
Vanderbilt University Nashville, TN, USA
Welcome to the 1st International Workshop on Next-Gen Middleware for MLOps in Distributed Systems (MIND). MIND will be hosted in conjunction with the 26th ACM/IFIP International Middleware Conference, which will be held at Vanderbilt University, Nashville, TN, USA from 15th – 19th December 2025.
This workshop aims to bring together researchers, practitioners, and industry stakeholders to explore middleware innovations that support MLOps in distributed systems. It will focus on practical solutions to real-world challenges in orchestrating end-to-end ML pipelines, from data collection to model deployment and continuous monitoring, in dynamic, heterogeneous, and resource-constrained environments.
Machine Learning Operations (MLOps) is a set of practices that combines machine learning, DevOps, and data engineering to streamline the end-to-end lifecycle of ML systems. It covers the full spectrum of activities involved in operationalizing ML, including data collection, preprocessing, model training, validation, deployment, monitoring, and continuous improvement, ensuring scalability, reliability, and efficiency in production environments.
However, challenges such as large-scale data transfer to the cloud, limited bandwidth, latency, and privacy concerns have increased the need for effective MLOps in distributed systems. In these settings, ML workflows must be managed and automated across a combination of cloud, fog, and edge environments. Implementing MLOps in such hybrid infrastructures requires addressing the inherent complexity of distributed pipelines while maintaining system performance, model reliability, and data security.
To meet these demands, middleware has emerged as a critical enabler. Middleware provides a layer of abstraction and coordination that simplifies the deployment, monitoring, and management of ML models across distributed components. It handles resource discovery, workload scheduling, communication management, and fault tolerance, while also integrating with tools for version control, experiment tracking, and model retraining.
We invite submissions on a wide range of topics including, but not limited to:
*All deadlines are Anywhere on Earth (AoE).
Authors are invited to submit original and unpublished work, which must not be submitted concurrently for publication elsewhere, in the following format:
The page limit includes figures, tables, appendices, and references. The font size must be set to 9pt. Papers exceeding the page limit or using smaller fonts will be desk-rejected without review. Submitted papers must adhere to the formatting instructions of the ACM SIGCONF style, which can be found on the ACM template page.
For each accepted paper, at least one author is required to register and attend the workshop in-person to present their poster/paper on-site. The Middleware 2025 conference proceedings will be published in the ACM Digital Library.
Submission Portal: Submit papers via HotCRP.
For any inquiries regarding the workshop, please contact MIND chairs at: