Dear colleagues,
Hope you had a nice and relaxing summer!
There has been a lot of news about our Collective Knowledge framework
this summer, so I would like to share some of it with you.
News
We would like to thank Microsoft for providing a 1-year grant
to host the cknowledge.org repository in the Microsoft Azure cloud!
We will present a CK-based project with ARM at ARM TechCon'16 in Santa Clara in October
(see schedule,
DATE'16 paper and CPC'15 paper).
We will demonstrate Workload Knowledge, an open framework for gathering and sharing
knowledge about system design and optimization using real-world workloads.
Powered by three open-source projects (ARM's Workload Automation, cTuning's
Collective Knowledge and Jupyter Notebooks), Workload Knowledge aims to
dramatically accelerate innovation in computer engineering and lead
to the design of highly efficient systems.
We will announce various Collective Knowledge awards at ARM TechCon for the top contributors
sharing workloads, data sets, tools, autotuning plugins, predictive models and optimization knowledge.
Active student contributors will be given priority for internships
at dividiti!
Try CK now
and join our growing community!
Program crowd-tuning
We would like to thank all participants
for testing our CK-powered
GCC/LLVM crowd-tuning approach using spare mobile phones, tablets, laptops and cloud servers.
More than 300 distinct Android, Windows, Linux and macOS-based platforms
have now participated in experiment crowdsourcing (nearly 150 distinct CPUs
and ~50 distinct GPUs). You can find all the shared meta-data on
GitHub.
Furthermore, the community has collected and shared numerous distinct GCC and LLVM optimizations
for more than 70 (CPU, compiler version) tuples across ~140 shared workloads.
This opens up many interesting opportunities for practical research in machine-learning-based autotuning, run-time adaptation
and SW/HW co-design!
CK improvements
The above collaborative optimization helped us stabilize the CK framework, including the Android mobile app.
Thanks to community contributions and fixes, CK autotuning now supports macOS.
We also added support for continuous integration frameworks on Linux and Windows, for Jupyter
notebooks, and for Docker!
We have released the new Collective Knowledge framework V1.8.1 (available via pip):
* http://github.com/ctuning/ck
We have also released Android application V2.2 with sources to help you participate in various experiment crowdsourcing scenarios
using Android-based mobile and IoT devices:
* https://play.google.com/store/apps/details?id=openscience.crowdsource.experiments
* https://github.com/ctuning/crowdsource-experiments-using-android-devices
Finally, we have provided new documentation with various Getting Started guides:
* http://github.com/ctuning/ck/wiki
* http://github.com/ctuning/ck/wiki/Getting-started-guide
* http://github.com/ctuning/ck/wiki/Portable-workflows
Open Science and Reproducible Research
* Congratulations to Dr. Abdul Memon (my last PhD student) on successfully defending his thesis
"Crowdtuning: Towards Practical and Reproducible Auto-tuning via Crowdsourcing and Predictive Analytics"
at the University of Paris-Saclay.
Most of the software, data sets and experiments are not only reproducible
but also shared as reusable and extensible components via Collective Mind
and CK!
* We have moved our "Open Science" wiki with related resources to the CK GitHub.
* We have helped with Artifact Evaluation for PACT'16!
* We continue discussions with ACM colleagues about how to enable collaborative and reproducible R&D across all SIGs.
* Please check this interview with Dr. Anton Lokhmotov
(CEO of dividiti) about how CK can help enable efficient, reliable and cheap computing everywhere!
Plans
After nearly 20 years, we are finally moving back to AI research,
with several projects related to unifying benchmarking, tuning and access to DNNs via CK:
* http://bit.ly/ck-cnn
* http://github.com/dividiti/ck-caffe
* http://github.com/ctuning/ck-tensorflow
We are also working on unifying benchmarking and multi-objective autotuning across remote devices;
adding more workloads in the CK format; adding more machine-learning-based autotuning strategies;
crowdsourcing compiler bug detection; crowd-tuning OpenCL and CUDA BLAS libraries; further improving the documentation;
and unifying predictive analytics, including access to DNN frameworks via the CK web service.
See our assorted plans here.
You can also check open tickets at GitHub pages of https://github.com/ctuning
and https://github.com/dividiti.
The community
Since there is a growing number of CK-powered collaborative and reproducible projects,
as well as CK users and participants in experiment crowdsourcing, we strongly encourage
you to participate in public discussions via
this public mailing list
or the LinkedIn group. This will help
the community share knowledge and experience about CK while avoiding common pitfalls!
Have a very productive fall; we look forward to discussing CK projects with you,
Grigori and the Collective Knowledge team