Lost in technological chaos?

  • Tired of keeping up with the ever-changing SW/HW stack?
  • Lost in the ever-growing number of design choices?
  • Never find the time or resources to optimize your workloads and tune your models?
  • Spend more time on ad-hoc experimentation than on innovation?
  • End up with under-performing (uncompetitive) and over-provisioned (expensive) products?

Use the Collective Knowledge SDK
to clean up this mess

Our open-source Collective Knowledge framework (see GitHub, Wiki, portable workflows, notable use cases) helps you gradually convert ad-hoc experimental workflows and artifacts into customizable, reusable and portable components with a unified JSON API, with the help of the community.
CK also lets users add, automatically plug in and test numerous sub-packages from the ck-math repository, as well as various models and datasets.
See the Wikipedia article describing some notable CK use cases.
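The unified JSON API mentioned above follows a simple convention: every CK component action takes a Python dictionary as input and returns a dictionary whose 'return' key is 0 on success and non-zero on error (with the problem described under 'error'). The following is a minimal self-contained sketch of that convention only, not the real CK kernel; the 'detect_platform' action and its output keys are hypothetical:

import json

# Sketch of the CK unified JSON API convention (illustrative, NOT the real
# CK kernel): a dict goes in, a dict comes out, and 'return' signals status.
def access(i):
    action = i.get('action', '')
    if action == 'detect_platform':
        # a real CK module would probe the OS/CPU/GPU here
        return {'return': 0, 'platform': {'os': 'linux'}}
    return {'return': 1, 'error': 'unknown action "' + action + '"'}

r = access({'action': 'detect_platform'})
if r['return'] > 0:
    print('Error: ' + r['error'])
else:
    print(json.dumps(r['platform']))

Because every component speaks this same dict-in/dict-out protocol, components can be chained and swapped without bespoke glue code.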



Our vision for CK-powered collaborative AI R&D is to follow the Linux and Wikipedia way: open and free!



A growing passionate community of researchers is contributing cross-platform packages and experimental workflows.



CK components can be assembled together like LEGO® to quickly build experimental workflows and focus on problem-solving.

Participate in collaborative AI optimization

CK provides a unique, open-source and customizable test bed for crowdsourcing multi-objective autotuning experiments across diverse hardware (from mobile devices and IoT to supercomputers and cloud servers), models and inputs with the help of the community:
ck-env, ck-autotuning, ck-crowdtuning, ck-web
(CPC'15, JSP'14, DATE'16).
We reuse this CK functionality to implement crowd-benchmarking and crowd-tuning of AI approaches that meet performance, power-consumption, prediction-accuracy, memory-usage and cost requirements across a wide range of applications and form factors, from sensors to self-driving cars (IWOCL'16 article).
For example, download our Android app from Google Play to participate in collaborative benchmarking, optimization and testing of deep learning and other algorithms based on the CK engine.
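To illustrate what "multi-objective autotuning" means here, the toy sketch below (plain Python, NOT CK code) randomly samples compiler-flag combinations, "measures" each with a fabricated deterministic benchmark, and keeps only the Pareto-optimal trade-offs between two simulated objectives, execution time and code size:

import random

# Toy multi-objective autotuning sketch: sample flag combinations and keep
# the Pareto frontier over (time, size). The measurements are simulated so
# the example is self-contained; a real autotuner would run benchmarks.
FLAGS = ['-O3', '-funroll-loops', '-ftree-vectorize', '-fomit-frame-pointer']

def measure(flag_subset):
    # Placeholder for a real benchmark run: derive deterministic pseudo-scores
    # from the chosen flags.
    rnd = random.Random(','.join(sorted(flag_subset)))
    return (rnd.uniform(0.5, 2.0), rnd.uniform(10.0, 50.0))  # (time, size)

def dominated(a, b):
    # True if point b is at least as good as a on every objective
    # and strictly better on at least one.
    return all(y <= x for x, y in zip(a, b)) and any(y < x for x, y in zip(a, b))

def pareto_front(points):
    # Keep only points not dominated by any other sampled point.
    return [p for p in points if not any(dominated(p[1], q[1]) for q in points)]

samples = []
rng = random.Random(42)
for _ in range(20):
    subset = [f for f in FLAGS if rng.random() < 0.5]
    samples.append((subset, measure(subset)))

for flags, (t, size) in pareto_front(samples):
    print('%-60s time=%.2f size=%.1f' % (' '.join(flags) or '(no flags)', t, size))

Crowdsourcing simply distributes the `measure` step across many volunteered devices; each device contributes its measured points, and the community-wide Pareto frontier is aggregated on the server.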

Share results with the community

View the latest results shared by the community during optimization crowdsourcing in the public CK repository.
View and reuse information about participating platforms (OS, CPU, GPU, GPGPU, etc.) in the ck-crowdtuning-platforms repository.
View sample CK-powered Jupyter Notebooks shared by dividiti.
See CK-powered artifacts and workflows shared during Artifact Evaluation at leading computer systems conferences, including CGO, PPoPP and PACT.

Develop a simple and common AI API

We are developing an open, simple and unified CK API for AI, connected to various collaboratively optimized DNN engines (Caffe, TensorFlow, etc.) and models across diverse platforms (online demo and wiki).

See an example of the CK JSON API for predicting compiler flags using machine learning:

import ck.kernel as ck

compiler='LLVM 4.0.0'

# Program features of the kernel to optimize
# (GCC autotuning stats for RPi3)
features= ["9.0", "4.0", "2.0", "0.0", "5.0", "2.0", "0.0", "4.0", "0.0", "0.0", 
           "2.0", "0.0", "7.0", "0.0", "0.0", "10.0", "0.0", "0.0", "1.0", "2.0", 
           "10.0", "4.0", "1.0", "14.0", "2.0", "0.714286", "1.8", "3.0", "0.0", 
           "4.0", "0.0", "3.0", "0.0", "3.0", "0.0", "0.0", "0.0", "0.0", "0.0", 
           "0.0", "2.0", "0.0", "1.0", "0.0", "1.0", "5.0", "10.0", "2.0", "0.0", 
           "32.0", "0.0", "10.0", "0.0", "0.0", "0.0", "0.0", "3.0", "33.0", "12.0", 
           "32.0", "93.0", "14.0", "19.25", "591.912", "11394.3"]

# Ask a shared CK module to predict the most profitable optimization
# for these features. NOTE: the 'action' and 'module_uoa' values below
# are illustrative placeholders; see the CK wiki for the exact API.
r=ck.access({'action':'predict_opt',
             'module_uoa':'model',
             'compiler':compiler,
             'features':features})
if r['return']>0: ck.err(r)

ck.out('Predicted optimization: '+r['predicted_opt'])

The same JSON API can also be used to classify images with various self-optimizing DNN engines and models, in the cloud or locally.
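For the image-classification use case, a call through the same JSON-style API might look like the sketch below. The 'classify' action, the 'dnn.engine' module name and the 'top_prediction' result key are hypothetical placeholders for illustration, not the documented CK interface; the snippet degrades gracefully when CK is not installed:

# Hedged sketch: image classification through a CK-style JSON API.
# The action, module and result-key names are HYPOTHETICAL placeholders;
# consult the CK wiki for the real module names.
try:
    import ck.kernel as ck
except ImportError:
    ck = None  # CK not installed; classify_image reports an error dict

def classify_image(path):
    if ck is None:
        return {'return': 1, 'error': 'CK is not installed'}
    return ck.access({'action': 'classify',         # hypothetical action
                      'module_uoa': 'dnn.engine',   # hypothetical module
                      'image': path})

r = classify_image('cat.jpg')
if r['return'] > 0:
    print('Error: ' + r.get('error', 'unknown'))
else:
    print('Top prediction: ' + str(r.get('top_prediction')))

Note that the calling code is identical in shape to the compiler-flag example above: only the action and module change, which is the point of a unified JSON API.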

Contribute to Collective Knowledge!