Join the open MLCommons task force on automation and reproducibility to participate in the collaborative development of the Collective Knowledge v3 playground (MLCommons CK), powered by the Collective Mind automation language (MLCommons CM) and the Modular Inference Library (MIL), to automate benchmarking, optimization, design space exploration and deployment of Pareto-efficient AI/ML applications across any software and hardware, from the cloud to the edge!
Supported platforms: Windows, Linux, macOS, Android.
The Collective Knowledge framework (CK) helps organize any software project as a database of reusable components (algorithms, datasets, models, frameworks, scripts, experimental results, papers, etc.) with common automation actions and extensible meta descriptions based on the FAIR principles (findability, accessibility, interoperability and reusability). The goal is to make it easier for researchers, practitioners and students to reproduce, compare and build upon techniques from published papers shared in the common CK format, adopt them in production, and reuse best R&D practices.
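For example, components can be queried programmatically through the CK Python API. The following is a minimal sketch assuming the CK v1/v2 `ck.kernel.access` interface; the `dataset` module and the `imagenet` tag are illustrative:

```python
# Minimal sketch of querying CK components via the CK Python API.
# The module name ("dataset") and tag ("imagenet") are illustrative.
import ck.kernel as ck

# Search all pulled CK repositories for dataset components with a given tag.
r = ck.access({'action': 'search',
               'module_uoa': 'dataset',
               'tags': 'imagenet'})

if r['return'] > 0:
    # CK reports errors via a non-zero 'return' code and an 'error' string.
    print('CK error:', r.get('error', ''))
else:
    for entry in r.get('lst', []):
        print(entry.get('data_uoa'), '->', entry.get('path'))
```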
See how the CK technology helps automate benchmarking, optimization and design space exploration of ML systems and accelerates AI, ML and systems innovation: journal article, ACM tech talk, cKnowledge.io portal, and real-world use cases from MLPerf, General Motors, Arm, IBM, Amazon, Qualcomm, Dell, the Raspberry Pi Foundation and ACM.
Presentations about CK (ACM, General Motors and FOSDEM)



Developing innovative technology (AI, ML, quantum, IoT) and deploying it in the real world is a painful, ad-hoc, time-consuming and expensive process due to continuously evolving software, hardware, models, datasets and research techniques. After struggling with these problems for many years, we started developing the Collective Knowledge framework (CK) to decompose complex research projects into reusable automation actions and components with unified APIs, a CLI and JSON meta descriptions. These CK components abstract software, hardware, models, datasets, scripts and results, and can be connected into portable CK workflows that apply DevOps principles. Such workflows use a portable meta package manager with a list of all dependencies to automatically adapt to a given platform (detect hardware, software and environment, install missing packages, and build/optimize code).
We have spent the last few years testing CK with our academic and industrial partners as a playground for implementing and sharing reusable automation actions and components typical of AI, ML and systems R&D, while agreeing on common APIs and JSON meta descriptions. We used these components to assemble portable workflows from reproduced research papers during the artifact evaluation process that we helped organize at ML and systems conferences including ASPLOS, CGO, PPoPP and MLSys. We then demonstrated that such portable workflows can automate design space exploration of AI/ML/SW/HW stacks, automate MLPerf benchmark submissions, and simplify the deployment of ML systems in production in the most efficient way (speed, accuracy, energy, cost) across diverse platforms from data centers to edge devices, as sketched below.
The CK framework is essentially a common playground that connects researchers and practitioners to learn how to collaboratively design, benchmark, optimize and validate innovative computational technology, including self-optimizing and bio-inspired computing systems. Our mission is to organize and systematize all our AI, ML and systems knowledge in the form of portable workflows, automation actions and reusable artifacts using our open CK platform with reproducible papers and live SOTA scoreboards for crowd-benchmarking. We continue using CK to support related initiatives including MLPerf, PapersWithCode, ACM artifact review and badging, and artifact evaluation.

The CK concept is to bring DevOps principles to computational research while abstracting, unifying and connecting popular tools and services instead of substituting them.