Here we provide details on how to reproduce our experiments via the Collective
Knowledge Framework and Repository (CK). You can download and install all related CK repositories
on your local machine from the command line simply as:

    ck pull repo:reproduce-carp-project
    ck pull repo:reproduce-ck-paper-large-experiments
    ck pull repo:reproduce-ck-paper
    ck pull repo:reproduce-pamela-project
    ck pull repo:reproduce-pamela-project-small-dataset

We would like to thank the community for validating various results and sharing unexpected behavior with us via our public CK repository!
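If you prefer to script the setup, the same repositories can also be pulled programmatically. The sketch below assumes the CK v1 Python kernel (`ck.kernel.access`), whose command-line front end maps `ck pull repo:<name>` onto the `pull` action of the `repo` module; the `return`/`error` convention is part of that API, but treat the exact call as an illustration rather than the official recipe.

```python
# Sketch: pulling the CK repositories programmatically instead of via the CLI.
# Assumes the CK v1 Python kernel; "ck pull repo:<name>" is mapped onto the
# 'pull' action of the 'repo' module.
import ck.kernel as ck

repos = [
    'reproduce-carp-project',
    'reproduce-ck-paper-large-experiments',
    'reproduce-ck-paper',
    'reproduce-pamela-project',
    'reproduce-pamela-project-small-dataset',
]

for name in repos:
    r = ck.access({'action': 'pull', 'module_uoa': 'repo',
                   'data_uoa': name, 'out': 'con'})
    if r['return'] > 0:
        # CK convention: a non-zero 'return' signals an error described in 'error'
        raise RuntimeError(r['error'])
```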
Briefly, CK allows users to organize and share artifacts (code and data) as reusable Python components
with a simple JSON API. Unlike other tools, CK uses an agile, schema-free and specification-free
approach. This is particularly important in computer engineering, where
software changes practically every day: we need
to avoid wasting weeks or months on experiment specifications when
experimental results may become outdated within days.
Instead, only once a research idea has been quickly prototyped and validated,
CK allows the community to collaboratively add and improve the meta-descriptions of
interfaces and artifacts via a simple and human-readable JSON format
(see the following papers for more details
[1, 2]).
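The JSON API is uniform across components: every CK module exposes actions that take a Python dictionary (i.e. JSON) and return one. Below is a minimal sketch, assuming the CK v1 kernel, of how a component's meta-description could be inspected; the module and entry names are placeholders, not specific artifacts from this paper.

```python
# Sketch: reading a component's JSON meta-description through the unified CK API.
# Assumes the CK v1 kernel; 'load' is the generic action that returns an entry's
# meta ('dict') and its location on disk ('path'). Module/entry names below are
# placeholders -- substitute any component from the pulled repositories.
import json
import ck.kernel as ck

r = ck.access({'action': 'load',
               'module_uoa': 'program',        # hypothetical module name
               'data_uoa': 'some-benchmark'})  # hypothetical entry name
if r['return'] > 0:
    raise RuntimeError(r['error'])

print('Entry path :', r['path'])
print('Meta (JSON):', json.dumps(r['dict'], indent=2))
```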
Such components can be easily connected into experimental workflows (pipelines),
like LEGO (TM) bricks, to quickly prototype research ideas, crowdsource experimentation,
preserve and query results in a third-party Hadoop-based Elasticsearch repository,
use universal and multi-objective autotuning, apply predictive analytics,
share artifacts as standard zip archives or via GitHub/Bitbucket for artifact evaluation
at conferences, workshops and journals, enable live and interactive articles, and so on
(a sketch of such a chained workflow follows the dataset list below):
CK scripts: reproduce-filter-speedup
Dataset 1: image-raw-bin-fgg-office-day-gray (features)
Dataset 2: image-raw-bin-fgg-office-night-gray (features; see the article explaining this unexpected behavior)
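As an illustration of chaining components into a workflow, the sketch below (again assuming the CK v1 kernel) searches for experiment entries by tag and loads the meta of each match, which is the kind of query one would run before plotting results or applying predictive analytics. The tag string is a placeholder, not the exact tag used by the reproduce-filter-speedup scripts.

```python
# Sketch: chaining two CK actions into a tiny workflow -- 'search' finds
# experiment entries by tag, 'load' retrieves each entry's JSON meta.
# Assumes the CK v1 kernel; the tag below is a placeholder.
import ck.kernel as ck

r = ck.access({'action': 'search',
               'module_uoa': 'experiment',
               'tags': 'reproduce-filter-speedup'})  # hypothetical tag
if r['return'] > 0:
    raise RuntimeError(r['error'])

for entry in r['lst']:
    rl = ck.access({'action': 'load',
                    'module_uoa': 'experiment',
                    'data_uoa': entry['data_uoa']})
    if rl['return'] > 0:
        raise RuntimeError(rl['error'])
    print(entry['data_uoa'], '->', rl['path'])
```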
Lenovo X240; Intel(R) Core(TM) i5-4210U CPU @ 1.70GHz; Ubuntu 14.04 64bit; GCC 4.4.4
(CK public repo: all experiments, compiler description, all compilers)
| Optimization | Binary size | image-raw-bin-fgg-office-day-gray: min time (s); expected time (s); variation (%) | image-raw-bin-fgg-office-night-gray: min time (s); expected time (s); variation (%) |
|---|---|---|---|
| -O3 | 10776 | 4.622; 4.634; 0.7% | 4.630; 4.653; 1.0% |
| -O3 -fno-if-conversion | 10784 | 5.169; 5.193; 1.0% (slowdown over -O3: 1.12x) | 4.091; 4.094; 0.2% (speedup over -O3: 1.14x) |
| -O2 | 10168 | 4.631; 4.754; 10.2% | 4.623; 4.639; 0.7% |
| -O1 | 10152 | 4.621; 4.633; 0.8% | 4.623; 4.685; 3.6% |
| -Os | 9744 | 4.668; 4.678; 0.6% | 4.666; 4.685; 0.9% |
Lenovo X240; Intel(R) Core(TM) i5-4210U CPU @ 1.70GHz; Ubuntu 14.04 64bit; GCC 4.9.1
(CK public repo: all experiments, compiler description, all compilers)
| Optimization | Binary size | image-raw-bin-fgg-office-day-gray: min time (s); expected time (s); variation (%) | image-raw-bin-fgg-office-night-gray: min time (s); expected time (s); variation (%) |
|---|---|---|---|
| -O3 | 11008 | 4.619; 4.630; 0.6% | 4.603; 4.628; 1.0% (slower than GCC 4.4.4 -O3 -fno-if-conversion) |
| -O3 -fno-if-conversion | 11008 | 4.615; 4.625; 0.6% | 4.624; 4.628; 0.3% (slower than GCC 4.4.4 -O3 -fno-if-conversion) |
| -O2 | 10880 | 4.632; 4.635; 0.1% | 4.602; 4.647; 2.5% |
| -O1 | 10360 | 4.625; 4.637; 0.7% | 4.630; 4.654; 1.8% |
| -Os | 10376 | 4.635; 4.653; 0.8% | 4.630; 4.652; 0.8% |
Lenovo X240; Intel(R) Core(TM) i5-4210U CPU @ 1.70GHz; Ubuntu 14.04 64bit; GCC 5.2.0
(CK public repo: all experiments, compiler description, all compilers)
| Optimization | Binary size | image-raw-bin-fgg-office-day-gray: min time (s); expected time (s); variation (%) | image-raw-bin-fgg-office-night-gray: min time (s); expected time (s); variation (%) |
|---|---|---|---|
| -O3 | 10776 | 4.622; 4.632; 0.8% | 4.630; 4.631; 0.2% (slower than GCC 4.4.4 -O3 -fno-if-conversion) |
| -O3 -fno-if-conversion | 10776 | 4.629; 4.649; 0.8% | 4.610; 4.626; 1.1% (slower than GCC 4.4.4 -O3 -fno-if-conversion) |
| -O2 | 10568 | 4.599; 4.610; 0.6% | 4.597; 4.603; 0.4% |
| -O1 | 10032 | 4.613; 4.616; 0.2% | 4.605; 4.616; 0.8% |
| -Os | 10088 | 4.609; 4.630; 1.4% | 4.608; 4.615; 0.4% |
Just for comparison during crowd-benchmarking: Samsung Chromebook 2; Samsung EXYNOS5; ARM Cortex A15/A7; ARM Mali-T628; Ubuntu 12.04 32bit; GCC 4.9.2
(CK public repo: all experiments, compiler description, all compilers)
| Optimization | Binary size | image-raw-bin-fgg-office-day-gray: min time (s); expected time (s); variation (%) | image-raw-bin-fgg-office-night-gray: min time (s); expected time (s); variation (%) |
|---|---|---|---|
| -O3 | 7416 | 7.396; 7.513; 2.9% | 7.390; 7.464; 2.6% |
| -O3 -fno-if-conversion | 7424 | 7.345; 7.455; 3.8% | 7.384; 7.490; 2.6% |
| -O2 | 7100 | 7.398; 7.926; 39.2% | 7.450; 7.514; 2.3% |
| -O1 | 7072 | 7.404; 7.444; 1.4% | 7.389; 7.443; 2.9% |
| -Os | 6292 | 7.367; 7.409; 1.5% | 7.375; 7.479; 2.7% |
Interactive graph of slambench (OpenCL) multi-objective autotuning (FPS versus accuracy or any other characteristics, with a Pareto frontier).
Non-interactive graph of the analysis of CPU/GPU-with-memory-transfer and GPU-only times of the HOG image-processing application (a machine-learning algorithm) versus different OpenCL optimizations and hardware configurations.
Three interactive graphs of (CPU time) / (GPU time with memory transfer) for the above HOG application versus experiment number, used for further predictive analytics to enable predictive scheduling at run time (see our paper for motivation).
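For readers unfamiliar with the filtering step behind such multi-objective graphs, the sketch below shows how a 2-D Pareto frontier (maximize FPS, maximize accuracy) could be extracted from a list of autotuning results. It is a generic illustration in plain Python, not CK's own implementation, and the sample measurements are hypothetical.

```python
# Sketch: extracting a 2-D Pareto frontier (maximize FPS, maximize accuracy)
# from autotuning results. Generic illustration, not CK's implementation.

def pareto_frontier(points):
    """points: list of (fps, accuracy) tuples; returns the non-dominated subset."""
    # Sort by FPS descending, then keep points whose accuracy improves
    # on everything already kept.
    frontier = []
    best_accuracy = float('-inf')
    for fps, acc in sorted(points, key=lambda p: p[0], reverse=True):
        if acc > best_accuracy:
            frontier.append((fps, acc))
            best_accuracy = acc
    return frontier

# Hypothetical (FPS, accuracy) measurements from different configurations
results = [(12.0, 0.81), (9.5, 0.90), (14.2, 0.75), (9.0, 0.88), (11.0, 0.86)]
print(pareto_frontier(results))  # -> [(14.2, 0.75), (12.0, 0.81), (11.0, 0.86), (9.5, 0.9)]
```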
Building predictive models via CK (scikit-learn decision trees with different feature sets and depths)
| Platform | Feature set 1: Depth 2 | Feature set 2: Depth 2 | Feature set 2: Depth 3 | Feature set 2: Depth 4 | Feature set 2: Depth 5 | Feature set 3: Depth 2 | Feature set 3: Depth 3 | Feature set 3: Depth 4 | Feature set 3: Depth 5 | Feature set 3: Depth 6 |
|---|---|---|---|---|---|---|---|---|---|---|
| Samsung Chromebook 1 (Mali T604); own models | 95.0% / 97.8% | 97.8% / 97.8% | 100.0% / 97.8% | 100.0% / 99.3% | 100.0% / 99.6% | 100.0% / 100.0% | | | | |
| Samsung Chromebook 2 (Mali T628); models from Chromebook 1 | 51.1% / 51.7% | 30.0% / 51.5% | 30.0% / 51.8% / 52.6% | | | | | | | |
| Samsung Chromebook 2 (Mali T628); own models | 85.0% / 82.3% / 88.3% | 95.0% / 90.2% / 93.0% | 95.0% / 91.0% / 93.5% | 95.0% / 90.2% / 93.0% | 95.0% / 90.2% / 93.0% | 95.0% / 90.2% / 93.0% | 100.0% / 95.3% / 96.1% | 100.0% / 98.2% / 98.3% | 100.0% / 99.3% / 98.9% | 100.0% / 100.0% / 99.6% |
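A minimal sketch of this modelling step follows, assuming scikit-learn's DecisionTreeClassifier and synthetic placeholder features; the real models are built from the CK experiment entries and feature sets referenced above, and the accuracy protocol here (training plus cross-validation scores) is only an illustration of how depth can be varied.

```python
# Sketch: scikit-learn decision trees of increasing depth predicting the
# faster device (CPU vs GPU) from program/platform features. Feature
# generation and the accuracy protocol are placeholders, not the CK setup.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix (e.g. image size, block size, ...) and labels
# (1 = GPU with memory transfer is faster, 0 = CPU is faster).
rng = np.random.default_rng(0)
X = rng.random((200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)

for depth in range(2, 7):
    model = DecisionTreeClassifier(max_depth=depth, random_state=0)
    train_acc = model.fit(X, y).score(X, y)
    cv_acc = cross_val_score(model, X, y, cv=5).mean()
    print(f'depth {depth}: train accuracy {train_acc:.1%}, CV accuracy {cv_acc:.1%}')
```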