We made a special effort to simplify CK installation, with minimal dependencies, across Linux, Windows and MacOS. You need only a Git client and Python 2.7+ or 3.3+. Just try it yourself as described here. You can then follow a simple tutorial to get a first feel for CK capabilities (portable, customizable and reusable workflows and artifacts with a portable package manager).
List of practical CK use cases from the community:
CK allows users to give a common structure to their local ad-hoc code and data, and to pack them, together with the associated CK Python wrappers and JSON API, into CK repositories (a - typical ad-hoc experimental packs for Artifact Evaluation; b - unified and reusable experimental repositories such as the one for the CGO'17 paper or the ACM ReQuEST tournaments). This allows the community to collaboratively validate, improve and build upon shared research artifacts, thus enabling practical open science as described in the ACM ReQuEST-ASPLOS'18 proceedings front matter and this interactive and reproducible paper.
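The convention that makes this possible is that every CK module action consumes and produces a JSON-compatible dictionary. The sketch below is a toy illustration of that dispatch pattern only, not the real CK kernel; the module, action and entry names are made up for the example.

```python
# Toy illustration of CK's JSON-in/JSON-out convention (NOT the real CK
# kernel): every module action takes a dictionary and returns a dictionary
# with a 'return' code (0 = success), so any artifact can be scripted
# uniformly through one entry point.

def list_programs(request):
    # A hypothetical action: list entries of an imagined 'program' module.
    entries = ['cbench-automotive-susan', 'polybench-cpu-2mm']  # sample data
    return {'return': 0, 'lst': entries}

ACTIONS = {('program', 'list'): list_programs}

def access(request):
    """Dispatch a JSON request to the registered module action."""
    handler = ACTIONS.get((request.get('module_uoa'), request.get('action')))
    if handler is None:
        return {'return': 1, 'error': 'action not found'}
    return handler(request)

r = access({'module_uoa': 'program', 'action': 'list'})
assert r['return'] == 0
print(r['lst'])
```

Because every wrapper speaks the same dictionary protocol, higher-level tools can chain modules without knowing their internals.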
Such repositories can be easily shared and reused via public or private services including GitHub, BitBucket and GitLab. We keep track of some CK repositories, workflows, software detection plugins, portable packages and reusable CK modules. Feel free to add your own! In the future, we would like to assign DOIs to stable artifacts and provide a web service to automatically find their location.
For example, the Association for Computing Machinery (ACM) is evaluating Collective Knowledge technology for sharing artifacts and experimental workflows as reusable and customizable components along with reproducible publications (see their video about CK).
The ReQuEST consortium is also developing a scalable tournament framework, a common experimental methodology and an open repository powered by CK for continuous evaluation and optimization of the quality-vs-efficiency trade-off (Pareto frontier) of a wide range of real-world applications, libraries and models across the whole hardware/software stack on complete platforms. See the ACM proceedings, the results report and the live scoreboard.
Researchers can now use the CK SDK to convert their complex, ad-hoc experimental scripts into unified CK workflows assembled from shared CK artifacts like LEGO bricks:
Furthermore, CK has its own portable package manager for Linux, Windows, Android and MacOS to automatically adapt such workflows to the underlying software, detect multiple versions of installed dependencies that can co-exist and be used in parallel (see the available CK software detection plugins), and automatically install missing packages:
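The core idea behind a software detection plugin can be sketched as follows. This is a hedged illustration, not actual CK plugin code: it parses a compiler's version banner and registers each detected version as a distinct environment entry, so several versions can co-exist. The banner strings and environment key format below are made up for the example.

```python
import re

# Hedged sketch of what a software detection plugin does: parse a tool's
# version banner and register it as its own environment entry, so several
# versions can co-exist and be selected per workflow. The banners below are
# sample strings, not live output from your system.

def detect_gcc_version(banner):
    """Extract an x.y.z version number from a GCC version banner."""
    m = re.search(r'\b(\d+)\.(\d+)\.(\d+)\b', banner)
    return m.group(0) if m else None

environments = {}
for banner in ['gcc (Ubuntu 9.4.0-1ubuntu1) 9.4.0',
               'gcc (GCC) 12.2.0']:
    version = detect_gcc_version(banner)
    if version:
        # Hypothetical environment key: one entry per detected version.
        environments['compiler.gcc-' + version] = {'version': version}

print(sorted(environments))  # two co-existing GCC environments
```

A real plugin would invoke the tool (e.g. `gcc --version`) and record paths and variables in the environment entry; the principle is the same.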
This approach can considerably simplify artifact evaluation and the validation of experimental results at conferences and journals. Furthermore, such workflows can be reused and collaboratively improved by the community (rather than merely archived as Docker images, which quickly become outdated), thus enabling practical, agile and open research!
For example, the experimental workflow for the CGO'17 article by University of Cambridge researchers, which won a distinguished artifact award, was implemented using CK:
CK has an integrated web server allowing users to quickly prototype various web-based dashboards to run their workflows and analyze experimental results in workgroups. CK even allows creation of interactive reports and articles assembled from CK artifacts.
Here are some real examples of practical CK dashboards and interactive articles:
Furthermore, we can now generate entire collaborative websites using CK:
non-profit cTuning foundation: [ cTuning.org ]
Grigori Fursin's personal website with all cross-linked CK artifacts from past R&D: [ fursin.net/research ]
The integrated CK web server, combined with the unified JSON API of CK components, allows workflows running on different machines to interact with each other via JSON-based web interfaces. This, in turn, allows users to effectively crowdsource experiments across diverse platforms (mobile devices, tablets, laptops, servers, clouds, supercomputers) provided by supporters, similar to SETI@HOME. Results can be aggregated in public or private CK repositories for further visualization, analysis and improvement.
Colleagues from the Imperial College London used CK to crowdsource OpenCL compiler bug detection (HiPEAC technology transfer award winner):
CK helps our partners collaboratively benchmark their workloads, such as deep learning, across diverse platforms. You can try a simple example of compiling and running a shared workload on your own machine as follows:
$ ck pull repo:ck-autotuning
$ ck pull repo:ctuning-programs
You can list the shared programs in the CK format (JSON meta information describing how to compile and run each program, with all dependencies and data sets):
$ ck list program
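To give a flavour of what such meta information contains, here is an illustrative sketch written as a Python dictionary. The key names and substitution syntax are simplified stand-ins; the real CK schema may differ.

```python
import json

# Illustrative sketch of the kind of JSON meta a CK program entry carries
# (key names and placeholder syntax are simplified; the real CK schema
# may differ). The meta tells CK how to compile and run the program and
# which dependencies and data sets it needs.
meta = {
    'program_name': 'cbench-automotive-susan',
    'source_files': ['susan.c'],
    'compile_deps': {'compiler': {'tags': 'gcc,llvm'}},   # resolved at build time
    'run_cmds': {
        'corners': {'cmd': '$#BIN_FILE#$ $#dataset#$ tmp-output.tmp -c'}
    },
    'dataset_tags': ['image-pgm'],
}

print(json.dumps(meta, indent=2, sort_keys=True))
```

Because this description lives next to the code in a common format, any CK workflow can compile and run the program on a new platform without ad-hoc scripts.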
You can now compile a given program simply as follows:
$ ck compile program:cbench-automotive-susan --speed
Note that CK will detect all available versions of the required compilers (GCC, LLVM, ICC, ...) and libraries on your system using a customizable cross-platform package and environment manager with a JSON API and meta-descriptions. If some software dependencies are missing, CK will automatically install the required packages as described here.
Now you can run a given workload simply as follows:
$ ck run program:cbench-automotive-susan
CK will ask you which command line to use and will automatically detect or install all required data sets.
If you have the Android NDK/SDK installed on your host machine and an Android device connected via ADB, you can compile and run the same program on this device as follows:
$ ck compile program:cbench-automotive-susan --speed --target_os=android21-arm-v7a
$ ck run program:cbench-automotive-susan --target_os=android21-arm-v7a
Arm uses such CK workflows together with low-level tools to automate benchmarking and optimization of realistic workloads across diverse hardware.
High-level CK workflows together with the unified JSON API allowed us to implement universal, customizable, multi-objective and multi-dimensional autotuning as described here.
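The essence of multi-objective autotuning can be shown in a few lines. The sketch below is illustrative only (it is not CK's implementation, and the analytical cost model stands in for real compile-and-run measurements): it evaluates optimization choices against two objectives and keeps the Pareto-optimal ones.

```python
# Minimal sketch of multi-objective autotuning (illustrative only, NOT CK's
# implementation): evaluate optimization choices against several objectives
# and keep the Pareto front - the choices for which no other choice is at
# least as good on every objective.

def measure(flags):
    # Hypothetical analytical cost model standing in for a real compile+run.
    time = 10.0 - 2.0 * ('-O3' in flags) + 1.5 * ('-Os' in flags) + 1.0 * ('-g' in flags)
    size = 100.0 + 30.0 * ('-O3' in flags) - 20.0 * ('-Os' in flags) + 5.0 * ('-g' in flags)
    return (time, size)  # both objectives are minimized

def dominates(q, p):
    """True if point q is at least as good as p everywhere and differs."""
    return all(b <= a for b, a in zip(q, p)) and q != p

space = [(), ('-O3',), ('-Os',), ('-g',)]          # design/optimization knobs
results = {flags: measure(list(flags)) for flags in space}
front = {f: p for f, p in results.items()
         if not any(dominates(q, p) for q in results.values())}
print(front)  # '-g' is dominated; the others trade time against size
```

A real autotuner explores far larger spaces (compiler flags, pragmas, hardware knobs) and adds energy, accuracy and other objectives, but the Pareto-filtering step is the same.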
Furthermore, we can now crowdsource the exploration of large and non-linear design and optimization spaces to improve performance, energy, accuracy, memory consumption and other characteristics, or to automatically detect bugs, across diverse workloads and platforms provided by volunteers.
You can check our shared workflow to crowdsource optimization as follows:
$ ck pull repo:ck-crowdtuning
$ ck crowdsource experiment
For example, you can execute the shared workflow for collaborative program optimization with all related artifacts, and start participating in multi-objective crowd-tuning, simply as follows:
$ ck crowdtune program
You can also crowd-tune GCC on Windows as follows:
$ ck crowdtune program --gcc --target_os=mingw-64
If you have the GCC or LLVM compilers installed, you can continuously crowd-tune their optimization heuristics in quiet mode (for example, overnight) via:
$ ck crowdtune program --llvm --quiet
$ ck crowdtune program --gcc --quiet
This experimental workflow will optimize different shared workloads for multiple objectives (execution time, code size, energy, compilation time, etc.) using all exposed design and optimization knobs, while sending the most profitable optimization choices to the public CK-based server.
The CK server will, in turn, perform on-line learning to classify optimizations versus workloads, which can be useful for compiler/hardware designers and performance engineers (described in more detail in this article).
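The server-side idea can be illustrated with a toy on-line learner. This is not the actual CK service: the workload features, flag strings and nearest-neighbour rule below are made up to show how crowdsourced (workload, best-optimization) pairs can incrementally drive predictions for new workloads.

```python
# Toy sketch of the server-side idea (NOT the actual CK service):
# incrementally learn which optimization works best for which class of
# workload, using a 1-nearest-neighbour rule over simple workload features.
# Feature meanings and flag strings are hypothetical.

training = []  # (features, best_flags) pairs reported by volunteers

def report(features, best_flags):
    """On-line update: remember a crowdsourced (workload, optimization) result."""
    training.append((features, best_flags))

def predict(features):
    """Suggest flags for a new workload from its nearest known neighbour."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training, key=lambda t: dist(t[0], features))[1]

# Hypothetical features: (loop depth, memory intensity)
report((3, 0.9), '-O3 -funroll-loops')   # compute-heavy code with deep loops
report((1, 0.1), '-Os')                  # small, branchy code
print(predict((2, 0.8)))                 # closest to the first workload
```

Each new crowdsourced result refines the mapping, which is how the shared optimization knowledge keeps improving as more volunteers participate.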
You can even use our small Android application to crowdsource the tuning of GCC and LLVM compiler optimization heuristics while continuously learning from, and aggregating, optimization results in the public CK repository.
You can also participate in crowdtuning of popular third-party OpenCL, OpenMP and CUDA-based mathematical libraries as described here.
Having a common experimental infrastructure allows us to build reusable, realistic, diverse, and continuously evolving training sets in a common format (programs, data sets, models, unexpected behavior, mispredictions) with the help of our partners and the community.
See the following examples of shared training sets:
Our ultimate goal behind CK development is to a) reinvent computer engineering and make it more collaborative, reproducible and reusable; b) develop efficient and reliable computer systems from IoT devices to supercomputers; c) enable open science via reusable and customizable artifacts; and d) create a public repository of reusable AI artifacts (models, data sets, tools, etc.) and portable AI algorithms/workflows (classification, detection, etc.). This should help us enable open AI research, boost innovation in science and technology, get back to our AI-related projects, develop an artificial brain and have fun ☺!
See the ACM ReQuEST-ASPLOS'18 proceedings front matter and our motivation slides:
See how we combine all of the above techniques and tools to enable CK-powered open AI research.