Join the CK consortium to participate in collaborative learning and AI/SW/HW co-design across diverse platforms and environments.

Need to adapt to a Cambrian AI/SW/HW explosion and technological chaos?

  • Tired of keeping up with the ever-changing AI/SW/HW stack?
  • Lost in the ever-growing number of design choices?
  • Ending up with under-performing and expensive software and hardware?
  • Never finding time and resources to optimize your workloads and tune your models?
  • Spending more time on ad-hoc experimentation than on innovation?
  • Losing ad-hoc research software and artifacts when leading researchers leave?

Together with the community, we develop the open-source Collective Knowledge technology to address these issues, collaboratively co-design an efficient AI/SW/HW stack, and accelerate knowledge discovery!

Join the open consortium led by the non-profit cTuning foundation and dividiti Ltd to participate in the following community activities powered by CK.

Join collaborative, reproducible and reusable AI research powered by Collective Knowledge!

We are bringing together academia and industry, led by the non-profit cTuning foundation and dividiti Ltd, to use the open research SDK (Collective Knowledge) to:
  1. build an open AI repository of reusable, portable and customizable AI artifacts (models, data sets, tools) with CK Python wrappers and a common JSON API;
  2. assemble portable AI algorithms/workflows (classification, object detection, speech recognition) with a unified API from reusable AI artifacts;
  3. collaboratively test, optimize and co-design deep learning and other emerging AI workloads in terms of speed, accuracy, size, energy and other costs across the whole SW/HW/model/dataset stack from IoT to supercomputers;
  4. enable open AI research and crowdsource AI experiments such as machine learning and model training across numerous platforms provided by volunteers (potentially billions of devices), similar to SETI@home;
  5. boost innovation in science and technology;
  6. develop an AI "brain"!
The Collective Knowledge framework was specifically designed to support the above cases: combined with portable workflows, it helps users gradually convert their ad-hoc AI artifacts (code and data) and experimental workflows into customizable, portable and reusable components with Python wrappers, a common JSON API and unified JSON meta information, as sketched below.
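For example, an ad-hoc data set could be registered as a reusable CK component and then reloaded through the same JSON API along the following lines (a minimal sketch: the meta fields and the entry name are illustrative assumptions, since each CK module defines its own schema):

    import ck.kernel as ck

    # Illustrative JSON meta for a data set component (fields are assumptions)
    meta={'dataset_files':['images/cat.jpg', 'images/dog.jpg'],
          'tags':['dataset','image','jpeg']}

    # Register the component in a local CK repository via the unified JSON API
    r=ck.access({'action':'add',
                 'module_uoa':'dataset',
                 'data_uoa':'my-image-dataset',
                 'dict':meta})
    if r['return']>0: ck.err(r)

    # Any other workflow can now find and reuse it through the same API
    r=ck.access({'action':'load',
                 'module_uoa':'dataset',
                 'data_uoa':'my-image-dataset'})
    if r['return']>0: ck.err(r)
    ck.out('Tags: '+', '.join(r['dict'].get('tags',[])))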

Participate in collaborative testing, optimization and co-design of the whole AI/SW/HW stack via open competitions

Since CK enables crowdsourcing of experiments, such as multi-objective autotuning across diverse platforms similar to SETI@home, we implemented prototype CK-powered crowd-fuzzing, crowd-benchmarking and crowd-tuning workflows for a wide range of applications and form factors - from sensors to self-driving cars (IWOCL'16 article); see the sketch below.
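As a rough illustration, a volunteer machine could join such a crowd-tuning scenario from Python roughly as follows (a hedged sketch only: the 'crowdsource' action and the module name are assumptions rather than the exact CK-crowdtuning API):

    import ck.kernel as ck

    # Hypothetical sketch of joining a public crowd-tuning scenario
    # (the action and module names are assumptions, not the exact API).
    r=ck.access({'action':'crowdsource',
                 'module_uoa':'experiment.tune.compiler.flags',
                 'iterations':10,
                 'quiet':'yes'})
    if r['return']>0: ck.err(r)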

CK partners use such workflows, with the help of the non-profit cTuning foundation and dividiti Ltd, to co-design efficient AI solutions such as deep learning across the whole SW/HW stack from IoT to HPC that meet given trade-offs in performance, power consumption, prediction accuracy, memory usage and cost.

Download our engaging Android app from Google Play to help the community prototype collaborative benchmarking, optimization and testing of deep learning engines (including Caffe and TensorFlow), models and data sets. You can see the continuously aggregated results in the public CK repository:
  • The number of unique users who participated in DNN crowd-benchmarking and crowd-tuning: 600+
  • The number of unique platforms provided by volunteers: 790+
  • The number of unique CPUs inside participating platforms: 260+
  • The number of unique GPUs inside participating platforms: 110+
  • The number of unique operating systems: 280+
Our partner dividiti regularly shares public, processed benchmarking and optimization results across different DNN engines, models and hardware in the reproducible CK format using convenient Jupyter notebooks, for example:
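A notebook cell might retrieve such shared results from a CK repository along these lines (a hedged sketch: the tags and the post-processing are assumptions, not dividiti's actual notebooks):

    import ck.kernel as ck

    # Hypothetical sketch: find shared experiment entries by tags
    # (the tags below are assumptions).
    r=ck.access({'action':'search',
                 'module_uoa':'experiment',
                 'tags':'crowd-benchmarking,dnn'})
    if r['return']>0: ck.err(r)

    # Load the unified JSON meta of each entry for further analysis
    for e in r['lst']:
        rl=ck.access({'action':'load',
                      'module_uoa':'experiment',
                      'data_uoa':e['data_uoa']})
        if rl['return']>0: ck.err(rl)
        ck.out(e['data_uoa']+': '+', '.join(rl['dict'].get('tags',[])))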

Design a common and simple AI API

We are developing an open, simple and unified CK API for AI connected to various collaboratively optimized DNN engines (Caffe, TensorFlow, etc.) and models across diverse platforms (online demo and wiki).

See a CK JSON API example to classify images using various self-optimizing DNN engines and models in the cloud or locally:

    import ck.kernel as ck

    image='use_dnn_to_classify_image.jpg'

    r=ck.access({'action':'ask',
                 'module_uoa':'advice',
                 'to':'classify_image',
                 'image':image})
    if r['return']>0: ck.err(r)
  
or to predict compiler flags using machine learning (part of our long-term initiative to enable self-optimizing computer systems; see article 1 and article 2):
    import ck.kernel as ck

    scenario='experiment.tune.compiler.flags.llvm.e'
    compiler='LLVM 4.0.0'
    # LLVM autotuning stats for RPi3
    cpu_name='BCM2709'

    features= ["9.0", "4.0", "2.0", "0.0", "5.0", "2.0", "0.0", "4.0", "0.0", "0.0", 
               "2.0", "0.0", "7.0", "0.0", "0.0", "10.0", "0.0", "0.0", "1.0", "2.0", 
               "10.0", "4.0", "1.0", "14.0", "2.0", "0.714286", "1.8", "3.0", "0.0", 
               "4.0", "0.0", "3.0", "0.0", "3.0", "0.0", "0.0", "0.0", "0.0", "0.0", 
               "0.0", "2.0", "0.0", "1.0", "0.0", "1.0", "5.0", "10.0", "2.0", "0.0", 
               "32.0", "0.0", "10.0", "0.0", "0.0", "0.0", "0.0", "3.0", "33.0", "12.0", 
               "32.0", "93.0", "14.0", "19.25", "591.912", "11394.3"]

    r=ck.access({'action':'ask',
                 'module_uoa':'advice',
                 'to':'predict_compiler_flags',
                 'scenario':scenario,
                 'compiler':compiler,
                 'cpu_name':cpu_name,
                 'features':features})
    if r['return']>0: ck.err(r)

    ck.out('Predicted optimization: '+r['predicted_opt'])
  

Prepare representative training sets with the community

Our Android application for crowdsourcing AI algorithms across diverse platforms provided by volunteers lets the community collect mispredictions in a public repository, thus helping to improve existing training sets and gradually assemble representative ones!
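For illustration, a single misprediction record could be pushed to the public repository roughly as follows (a minimal sketch only: the module name, entry name and fields are assumptions, not the actual app API):

    import ck.kernel as ck

    # Hypothetical sketch of sharing one misprediction with the public CK
    # repository (module, entry and field names below are assumptions).
    r=ck.access({'action':'add',
                 'module_uoa':'experiment.user',
                 'data_uoa':'misprediction-example',
                 'dict':{'image':'use_dnn_to_classify_image.jpg',
                         'predicted_label':'dog',
                         'correct_label':'cat',
                         'engine':'TensorFlow',
                         'model':'mobilenet-v1'}})
    if r['return']>0: ck.err(r)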
Our growing consortium is only at the beginning of an exciting and long journey to enable open AI research, develop self-optimizing and self-learning AI systems from IoT to supercomputers, accelerate knowledge discovery, and boost innovation in science and technology - join us!
          
Website designed using CK
Copyright © 2014-2017 non-profit cTuning foundation and dividiti