We are excited to join forces with MLCommons and OctoML.ai! Contact Grigori Fursin for more details!
[ Project news, Project overview, ACM TechTalk, Reddit discussion ]
We thank all CK partners and users for many interesting proof-of-concept projects, as well as for their encouragement, feedback and support:
MLCommons.org (MLPerf benchmark), OctoML.ai, ACM, the Raspberry Pi Foundation, ML & systems conferences, cKnowledge.io, Qualcomm, General Motors, Arm, Linaro, IBM, Dell EMC, Riverlane, ThoughtWorks, Innovate UK, Cornell University, the University of Toronto, the University of Washington, the University of Cambridge, EPFL, Amazon, TomTom, Xored, EU H2020, TETRACOM, the University of Glasgow, the University of Edinburgh, ENS Paris, the Hartree Centre, Imperial College London, dividiti, KRAI, and the cTuning foundation

We collaborate with MLCommons to automate the MLPerf™ inference benchmark (a broad ML benchmark suite for measuring the performance of ML software frameworks, ML hardware accelerators, and ML cloud platforms) and to make it easier to submit and reproduce results with the help of portable CK workflows and meta packages.

News: the cTuning foundation and OctoML donated CK and MLOps components to MLCommons in September 2021 to continue development within a new working group that automates design space exploration of ML systems - more details to come soon!

We collaborate with OctoML.ai to develop a platform that helps end-users automate the optimization and deployment of the whole ML/SW/HW stack, in terms of speed, accuracy, energy and costs, across diverse models, data sets, frameworks and platforms from the cloud to the edge.

Qualcomm uses CK to prepare and submit MLPerf™ inference benchmark results for their Cloud AI 100 accelerators:

cKnowledge.io is an open platform developed by Grigori Fursin (CK author) to automate software/hardware co-design for realistic AI/ML tasks based on end-users' requirements and constraints. This platform aggregates portable CK workflows and components from the community for diverse ML models, data sets, frameworks and platforms from the cloud to the edge. It then becomes possible to perform automated design space exploration of AI/ML/SW/HW stacks and find Pareto-optimal ones in terms of accuracy, latency, throughput, energy, costs and other characteristics that satisfy user constraints. This platform was acquired by OctoML.ai in 2021.

General Motors uses CK to crowd-benchmark and crowd-tune DNN engines and models. CK makes it easy to swap different AI/ML frameworks, models and data sets like LEGO™ bricks, as described in GM's Embedded Vision Summit presentation on CK-powered CNN/SW/HW benchmarking:

Arm is one of the first and main users of the Collective Knowledge technology, applying it to automate the design of more efficient computer systems for emerging workloads such as deep learning across the whole SW/HW stack, from IoT to HPC. See the HiPEAC info (page 17) and the Arm TechCon'16 demo for more details about how Arm and the cTuning foundation use CK to accelerate computer engineering.

A growing community of quantum computing experts and enthusiasts is building Quantum Collective Knowledge (QCK) - a unified workflow to benchmark, compare and optimize traditional and quantum algorithms, and to forecast future developments in quantum computing. CK also supports regular, collaborative and reproducible quantum hackathons, sharing all experimental results on live SOTA scoreboards:

The ReQuEST consortium is developing a scalable, CK-powered tournament framework, a common experimental methodology and an open repository to co-design and share whole Pareto-efficient software/hardware stacks (accuracy, speed, energy, size, costs) for real-world applications and emerging workloads such as AI and ML across diverse models, data sets and platforms from the cloud to the edge. ReQuEST also promotes reproducibility of experimental results and reusability of systems research artifacts by standardizing evaluation methodologies and facilitating the deployment of efficient solutions on heterogeneous platforms.

The organizers develop a supporting open-source and portable workflow framework that provides unified evaluation and a real-time leaderboard of shared solutions. ReQuEST promotes quality-awareness to the architecture and systems community, and resource-awareness to the applications community and end-users. The best or original solutions close to the Pareto frontier in a multi-dimensional space of accuracy, execution time, power consumption, memory usage, resiliency, code/hardware/model size, costs and other metrics are kept and continuously updated in a public repository.

Amazon evaluates CK to help users optimize the performance, accuracy and speed of AI applications on AWS:

We collaborate with colleagues from TomTom on a model-driven approach for a new generation of adaptive libraries, while automating and crowdsourcing experiments and ML-based modeling using the Collective Knowledge framework.

We collaborate with the EcoSystem Lab at the University of Toronto, led by Professor Gennady Pekhimenko, to develop portable, customizable and reusable AI benchmarks based on the open-source TBD suite and the Collective Knowledge workflow framework.

The non-profit cTuning foundation regularly helps European and international projects (MILEPOST, PAMELA) to automate tedious R&D tasks, develop sustainable research software, perform collaborative and reproducible experiments, and share results on public scoreboards powered by CK.

We collaborate with colleagues from the Universities of Edinburgh and Glasgow, who use CK to automate and crowdsource the optimization of mathematical libraries and compilers.

We collaborate with colleagues from ENS Paris to automate and crowdsource polyhedral optimization using CK.

We collaborate with colleagues from the Hartree Centre, who use CK to create customizable and sustainable experimental workflows and to collaboratively optimize realistic workloads across various HPC systems.

Xored has developed several open-source CK extensions since 2016, including a DNN engine optimization front-end and an Android app to crowdsource DNN benchmarking.

University of Cambridge colleagues use the Collective Knowledge framework to develop sustainable software, accelerate research, automate experimentation and reuse artifacts. For example, the portable and reproducible experimental workflow from the "Software Prefetching for Indirect Memory Accesses" article by Sam Ainsworth and Timothy M. Jones received a distinguished artifact award at CGO'17.

dividiti is an engineering company in Cambridge providing professional services based on the CK framework to help hardware vendors submit MLPerf™ benchmark results.

KRAI is an engineering company that develops open-source CK workflows to automate MLPerf™ submissions and provides optimization services to co-design efficient SW/HW stacks for robotics.

Alastair Donaldson's group uses CK to automate and crowdsource the detection of compiler bugs (crowd-fuzzing of traditional, OpenCL and OpenGL compilers). The TETRACOM project "CLsmith in Collective Knowledge" won the HiPEAC technology transfer award in 2016.

The cTuning foundation is a non-profit R&D organization led by Grigori Fursin. It coordinates and sponsors all CK developments and reproducibility initiatives such as Artifact Evaluation!