cKnowledge Ltd is a research, engineering and consulting company established by Grigori Fursin in 2019 and headquartered in Paris. Its mission is to bridge the growing gap between AI/ML research and production.
We lead the development of Collective Knowledge technology v3 (CK) in collaboration with the community, MLCommons (50+ companies and universities) and the cTuning foundation. CK v3 includes the CK playground, the Collective Mind automation language (CM) and the Modular Inference Library (MIL).
We envision the CK playground as an open platform that empowers everyone, from a company expert to a child, to understand any state-of-the-art AI/ML technique and to automatically optimize and deploy it in the real world across rapidly evolving hardware, software, models and data, in the fastest and most efficient way, while reducing risks and cutting research, development, optimization and operational costs.
The CK playground is powered by the portable, technology-agnostic, human-readable and open-source Collective Mind automation language (CM), developed collaboratively within the MLCommons Task Force on Automation and Reproducibility. CM enables collaborative benchmarking and optimization of AI and ML systems at scale across diverse software, hardware, models and data from different vendors, while exposing all acquired knowledge and experience to users via the CK playground.
The community continuously extends and improves CM via portable and reusable CM scripts that resolve the "AI/ML dependency hell", interconnect otherwise incompatible software, hardware, models and data, and encode best practices and optimization techniques into powerful automation workflows in a transparent and non-intrusive way.
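To give a flavor of how such scripts are used, here is a minimal sketch of driving CM from Python. It assumes the `cmind` package (`pip install cmind`) and the public `mlcommons@ck` repository of CM scripts; exact repository names, script tags and options may differ between CM versions.

```python
# Minimal sketch of using the CM automation language from Python,
# assuming the "cmind" package and the mlcommons@ck script repository.
import cmind

# Pull the community repository with reusable CM scripts.
r = cmind.access({'action': 'pull',
                  'automation': 'repo',
                  'artifact': 'mlcommons@ck'})
if r['return'] > 0:
    raise RuntimeError(r.get('error', 'CM repo pull failed'))

# Run a portable script found by tags; CM resolves its software
# dependencies (the "AI/ML dependency hell") behind the scenes.
r = cmind.access({'action': 'run',
                  'automation': 'script',
                  'tags': 'detect,os',
                  'out': 'con'})
if r['return'] > 0:
    raise RuntimeError(r.get('error', 'CM script failed'))
```

The same actions are available from the `cm` command line, so workflows can be scripted interchangeably from the shell or from Python.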
This collaborative approach draws on more than 15 years of our experience leading innovative AI, ML and systems R&D projects and helping Fortune companies bring them to production across rapidly evolving software, hardware and data.
While still at the prototyping stage, our open-source technology already helps MLCommons, students and researchers automate and optimize MLPerf benchmark submissions; it has contributed to more than half of all performance and power results of the MLPerf inference benchmark since its inception.
We are also developing an LLM-based assistant that helps users benchmark ML systems and automatically select the most efficient software/hardware stack for emerging workloads based on their requirements and constraints, including cost, performance, accuracy, power consumption and size.
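As a simple illustration of this kind of constraint-based selection (all stack names, metrics and weights below are hypothetical, not part of our platform), an assistant could filter candidate stacks by hard constraints and rank the remainder by a user-weighted objective:

```python
# Hypothetical sketch of constraint-based stack selection;
# the stacks and numbers are illustrative, not real benchmark data.
from dataclasses import dataclass

@dataclass
class Stack:
    name: str
    cost_usd_per_hour: float   # deployment cost
    throughput_qps: float      # measured performance
    accuracy: float            # task accuracy, 0..1
    power_watts: float         # average power draw

CANDIDATES = [
    Stack("gpu-a", 3.20, 1800.0, 0.81, 300.0),
    Stack("gpu-b", 1.10,  650.0, 0.80, 150.0),
    Stack("cpu-x", 0.40,   90.0, 0.79,  65.0),
]

def select_stack(stacks, min_accuracy, max_cost, w_perf=1.0, w_power=0.2):
    """Drop stacks violating hard constraints, then rank the rest by a
    weighted score favoring throughput and penalizing power draw."""
    feasible = [s for s in stacks
                if s.accuracy >= min_accuracy
                and s.cost_usd_per_hour <= max_cost]
    if not feasible:
        raise ValueError("no stack satisfies the constraints")
    return max(feasible,
               key=lambda s: w_perf * s.throughput_qps - w_power * s.power_watts)

print(select_stack(CANDIDATES, min_accuracy=0.80, max_cost=2.00).name)  # gpu-b
```

In practice, the assistant would draw the candidate metrics from real benchmark results collected via CM rather than from a hand-written table.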
Don't hesitate to contact us to learn more about our platform, services and community initiatives.