Join the open MLCommons task force on automation and reproducibility to participate in the collaborative development of the Collective Knowledge v3 playground (MLCommons CK), powered by the Collective Mind automation language (MLCommons CM) and the Modular Inference Library (MIL), to automate benchmarking, optimization, design space exploration, and deployment of Pareto-efficient AI/ML applications across any software and hardware, from the cloud to the edge!

The latest news:

2024 March

Grigori Fursin and Arjun Suresh will present our new project, "Automatically Compose High-Performance and Cost-Efficient AI Systems with MLCommons' Collective Mind and MLPerf", at the MLPerf-Bench workshop @ HPCA'24.

2023 December

We thank MLCommons for trusting and adopting our open-source Collective Knowledge and Collective Mind technology to automate benchmarking and optimization of AI systems. We are now working with MLCommons to extend our CM automation workflows and help all MLCommons members run MLPerf inference benchmarks and submit v4.0 results in a unified way across any software/hardware stack, eliminating manual and tedious steps.

2023 September

We are proud to help the cTuning foundation and MLCommons deliver the new version of the Collective Knowledge technology (v3), with the open-source MLCommons CM automation language, the CK playground, and the Modular Inference Library (MIL). It became the first and only workflow automation to enable the mass submission of more than 12,000 performance results, including more than 1,900 power results, in a single MLPerf inference submission round, spanning more than 120 system configurations from different vendors in both the open and closed divisions. These submissions covered different implementations, all reference models, the main frameworks, and diverse software/hardware stacks, with support for the DeepSparse Zoo, the Hugging Face Hub, and BERT pruners from the NeurIPS paper.

This remarkable achievement became possible thanks to the open and transparent development of this technology as an official MLCommons project, with public Discord discussions, important feedback from Neural Magic, TTA, One Stop Systems, Nutanix, Collabora, Deelvin, AMD, and NVIDIA, and contributions from students, researchers, and even school children from all over the world via our public MLPerf challenges. Special thanks to One Stop Systems for showcasing the first MLPerf results on the Rigel Edge Supercomputer, and to TTA for sharing their platforms with us to make CM automation for DLRMv2 available to everyone.

Since it is impossible to describe all the compelling performance and power-efficiency results achieved by our collaborators in a short press release, we make them available, along with various derived metrics, at the Collective Knowledge playground, in mlcommons@ck_mlperf_results, and on this news page. We continue enhancing the MLCommons CM/CK technology to help everyone automatically co-design the most efficient end-to-end AI solutions based on their requirements and constraints. We welcome all submitters to follow our CK/CM automation developments on GitHub and to join our public Discord server to automate their future MLPerf submissions at scale.

See the related HPCwire article about cTuning and our CM/CK technology, and contact Grigori Fursin and Arjun Suresh for more details!