Join the open MLCommons task force on automation and reproducibility to participate in the collaborative development of the Collective Knowledge v3 playground (MLCommons CK), powered by the Collective Mind automation language (MLCommons CM) and the Modular Inference Library (MIL), to automate benchmarking, optimization, design space exploration and deployment of Pareto-efficient AI/ML applications across any software and hardware, from the cloud to the edge!
[ Public workgroup, Project overview, ACM TechTalk, Reddit discussion, News ]
We thank CK users and contributors for their encouragement, feedback and support:
MLCommons.org (MLPerf benchmark), Qualcomm, cKnowledge, ACM, the Raspberry Pi Foundation, ML&systems conferences, General Motors, Arm, Linaro, IBM, DELL EMC, RiverLine, ThoughtWorks, InnovateUK, Cornell U., U. Toronto, U. Washington, Cambridge U., EPFL, Amazon, TomTom, Xored, EU H2020, Tetracom, U. Glasgow, U. Edinburgh, ENS, the Hartree Supercomputing Centre, Imperial College London, dividiti, Krai, and the cTuning foundation.

Grigori Fursin and the cTuning foundation donated the open-source CK technology to MLCommons. We continue working with the community via our public MLCommons task force on automation and reproducibility to develop a public optimization platform (MLCommons CK playground) with a common automation language (MLCommons CM) to benchmark, optimize, co-design and deploy Pareto-efficient AI/ML applications across any software and hardware from the cloud to the edge in a collaborative, automated and reproducible way. Here is the press coverage for our community project: Forbes, LinkedIn.

General Motors uses CK to crowd-benchmark and crowd-tune DNN engines and models. CK makes it easy to swap different AI/ML frameworks, models and data sets like LEGO™ blocks, as described in GM's presentation on CK-powered CNN/SW/HW benchmarking at the Embedded Vision Summit:

Qualcomm uses CK to prepare and submit MLPerf™ inference benchmark results for their Cloud AI 100 accelerators:


The CK framework powers cKnowledge.io - a prototype of an open platform to automate and simplify the development, optimization and deployment of Pareto-efficient ML systems across continuously changing software and hardware stacks, from the cloud to the edge, based on user requirements and constraints.


Arm is one of the first and main users of the Collective Knowledge technology, applying it to automate the design of more efficient computer systems for emerging workloads such as deep learning across the whole SW/HW stack, from IoT to HPC. See the HiPEAC info (page 17) and the Arm TechCon'16 demo for more details about how Arm and the cTuning foundation use CK to accelerate computer engineering.


A growing community of quantum computing experts and enthusiasts is building Quantum Collective Knowledge (QCK) - a unified workflow to benchmark, compare and optimize traditional and quantum algorithms, and to forecast future developments in quantum computing. CK also supports regular, collaborative and reproducible quantum hackathons, with all experimental results shared on live SOTA scoreboards:

The ReQuEST consortium is developing a scalable, CK-powered tournament framework, a common experimental methodology and an open repository to co-design and share whole Pareto-efficient software/hardware stacks (accuracy, speed, energy, size, costs) for real-world applications and emerging workloads such as AI and ML across diverse models, data sets and platforms, from cloud to edge. ReQuEST also promotes reproducibility of experimental results and reusability of systems research artifacts by standardizing evaluation methodologies and facilitating the deployment of efficient solutions on heterogeneous platforms. The organizers develop a supporting open-source and portable workflow framework to provide unified evaluation and a real-time leaderboard of shared solutions. ReQuEST promotes quality-awareness to the architecture and systems community, and resource-awareness to the applications community and end users. We will keep and continuously update the best or original solutions close to the Pareto frontier in a multi-dimensional space of accuracy, execution time, power consumption, memory usage, resiliency, code/hardware/model size, costs and other metrics in a public repository.

Amazon evaluates CK to help users optimize the performance, accuracy and speed of AI applications on AWS:

We collaborate with colleagues from TomTom on a model-driven approach for a new generation of adaptive libraries, while automating and crowdsourcing experiments and ML-based modeling using the Collective Knowledge framework.


We collaborate with the EcoSystem Lab at the University of Toronto, led by Professor Gennady Pekhimenko, to develop portable, customizable and reusable AI benchmarks based on the open-source TBD suite and the Collective Knowledge workflow framework.


The non-profit cTuning foundation regularly helps European and international projects (MILEPOST, PAMELA) to automate tedious R&D, develop sustainable research software, perform collaborative and reproducible experiments, and share results on public scoreboards powered by CK.


We collaborate with colleagues from the Universities of Edinburgh and Glasgow, who use CK to automate and crowdsource the optimization of mathematical libraries and compilers.


We collaborate with colleagues from ENS Paris to automate and crowdsource polyhedral optimization using CK.


We collaborate with colleagues from the Hartree Supercomputing Centre, who use CK to build customizable and sustainable experimental workflows and to collaboratively optimize realistic workloads across various HPC systems.


Xored has developed several open-source CK extensions since 2016, including a DNN engine optimization front-end and an Android app to crowdsource DNN benchmarking.


University of Cambridge colleagues use the Collective Knowledge framework to develop sustainable software, accelerate research, automate experimentation and reuse artifacts. For example, the portable and reproducible experimental workflow from the "Software Prefetching for Indirect Memory Accesses" article by Sam Ainsworth and Timothy M. Jones received a distinguished artifact award at CGO'17.


dividiti is an engineering company created by Grigori Fursin and Anton Lokhmotov to provide professional services based on the CK framework and to help hardware vendors submit MLPerf™ benchmark results.


KRAI is an engineering company that develops open-source CK workflows to automate MLPerf™ submissions and provides optimization services to co-design efficient SW/HW stacks for robotics.


Alastair Donaldson's group uses CK to automate and crowdsource the detection of compiler bugs (crowd-fuzzing of traditional, OpenCL and OpenGL compilers). The TETRACOM project "CLsmith in Collective Knowledge" won a HiPEAC technology transfer award in 2016.


The cTuning foundation is a non-profit R&D organization led by Grigori Fursin. It coordinates and sponsors all CK developments and reproducibility initiatives such as Artifact Evaluation.