We are proud to help the cTuning foundation and MLCommons deliver the new version of the Collective Knowledge Technology v3 with the open-source MLCommons CM automation language, the CK playground and the modular inference library (MIL). This is the first and only workflow automation to enable the mass submission of more than 12,000 performance results in a single MLPerf inference submission round, including more than 1,900 power results, across more than 120 different system configurations from different vendors (different implementations, all reference models, support for the DeepSparse Zoo, the Hugging Face Hub and BERT pruners from the NeurIPS paper, the main frameworks, and diverse software/hardware stacks) in both the open and closed divisions!
This remarkable achievement was made possible by the open and transparent development of this technology as an official MLCommons project, with public Discord discussions, important feedback from Neural Magic, TTA, One Stop Systems, Nutanix, Collabora, Deelvin, AMD and NVIDIA, and contributions from students, researchers and even schoolchildren from all over the world via our public MLPerf challenges. Special thanks to One Stop Systems for showcasing the first MLPerf results on the Rigel Edge Supercomputer, and to TTA for sharing their platforms with us so that we could add CM automation for DLRMv2 and make it available to everyone.
Since it is impossible to describe all the compelling performance and power-efficiency results achieved by our collaborators in a short press release, we have made them available, along with various derived metrics, at the Collective Knowledge playground, via mlcommons@ck_mlperf_results and on this news page. We continue to enhance the MLCommons CM/CK technology to help everyone automatically co-design the most efficient end-to-end AI solutions based on their requirements and constraints. We invite all submitters to follow our CK/CM automation developments on GitHub and to join our public Discord server if they want to automate their future MLPerf submissions at scale.
See the related HPCwire article about cTuning and our CM/CK technology, and contact Grigori Fursin and Arjun Suresh for more details!