In total I spent three weeks in Rwanda: the first week I attended the “Theory to Practice” (T2P) workshop, the second week was ICLR, and during the third week I was on vacation, traveling through the country.
Theory to Practice workshop
The workshop consisted mostly of talks, contributed by guests from all over Africa and a few (only a small minority) from Europe.
I found it very interesting to see so many application problems, ranging from rain forecasting and poverty prediction from satellite images to analyzing audio data to detect wildlife. Overall, the talks focused on problems related to the African continent, and the speakers were always refreshingly passionate about the problems they were facing.
It was a different community: not only do people have to work with much less compute power, they also work on very applied problems, which means they need to communicate with the local population – a considerable challenge when that population is linguistically fragmented. The computational constraints also lead to more traditional ML techniques, which allow for more efficient inference and better analysis.
Another aspect was that the research was always respectful towards the problem domain it aimed to improve. To give a hypothetical negative example: an ML researcher discovers a new field, throws a neural network at it, and claims to have solved the problem. Fortunately, this was not the case at this workshop. Local knowledge guided the work, and the researchers sought out experts in the field who could provide valuable insight.
Overall, I am glad that I got to attend the workshop, despite the long days and very in-depth talks. It opened up a new perspective to me, one you often hear about from other people, yet it still feels different to experience it first-hand.
International Conference on Learning Representations
After a short weekend where we got to explore Kigali, we started with the conference.
Overall, the keynotes were a bit hit or miss. The very first one was contributed by Sofia Crespo, who describes herself as an artist using AI for installations. What I liked was that she curated the data herself, which is no easy task. That, plus the fact that the visual output is what matters, not necessarily the method, were interesting insights.
Another keynote talk that stuck out was about the "versatile learned optimizer" (VeLO), by Jascha Sohl-Dickstein. VeLO is a neural network that is supposed to replace more traditional optimizers like Adam or SGD. The gist is that they spent ungodly amounts of computing power training the recurrent networks inside VeLO, so that they turn gradients into parameter updates without any further tuning. This somewhat requires that VeLO was trained on problems "similar enough" to the one you are trying to solve during your own optimization, so I am not fully sure whether it is able to generalize well enough.
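To make the idea concrete: a learned optimizer replaces a hand-designed update rule (like SGD's `theta -= lr * grad`) with a small network that maps gradients and some running state to parameter updates. The sketch below is a toy illustration of that interface only, not VeLO itself; all weights, shapes, and names here are my own illustrative assumptions, and the "learned" weights are random rather than meta-trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "learned" update rule: a tiny per-parameter cell with fixed
# (here: random) weights. In a real learned optimizer such as VeLO,
# these weights would be meta-trained across many tasks.
W_in = rng.normal(scale=0.1, size=(2, 8))   # [grad, state] -> hidden
W_out = rng.normal(scale=0.1, size=(8,))    # hidden -> scalar update

def learned_update(grad, state):
    state = 0.9 * state + 0.1 * grad        # momentum-like running state
    h = np.tanh(np.stack([grad, state], axis=-1) @ W_in)
    return h @ W_out, state                 # per-parameter update, new state

# Hand-designed baseline for comparison: plain SGD.
def sgd_update(grad, lr=0.1):
    return -lr * grad

# Apply the learned rule to a toy problem, f(theta) = ||theta||^2.
theta = np.ones(4)
state = np.zeros(4)
for _ in range(100):
    grad = 2 * theta
    update, state = learned_update(grad, state)
    theta = theta + update
```

With random weights this rule will of course not actually optimize anything; the point is only that the update direction is produced by a network rather than a fixed formula, which is why the quality of the meta-training distribution determines how well it generalizes.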
The posters were, of course, the most interesting part of the conference. I was happy to present my own poster on the first day (in the second session), which left me free to explore the others afterwards. Some posters I particularly liked were
- Unsupervised Manifold Alignment with Joint Multidimensional Scaling by Dexiong Chen, Bowen Fan, Carlos Oliver, and Karsten Borgwardt,
- Minimalistic Unsupervised Representation Learning with the Sparse Manifold Transform by Yubei Chen, Zeyu Yun, Yi Ma, Bruno Olshausen, and Yann LeCun,
- Towards Understanding and Mitigating Dimensional Collapse in Heterogeneous Federated Learning by Yujun Shi, Jian Liang, Wenqing Zhang, Vincent Tan, and Song Bai,
- Bispectral Neural Networks by Sophia Sanborn, Christian Shewmake, Bruno Olshausen, and Christopher Hillar, and
- Quantus x Climate – Applying explainable AI evaluation in climate science by Philine Bommer, Anna Hedström, Marlene Kretschmer, and Marina Höhne.
Those are just a selection; due to battery mismanagement on my end, I didn't manage to snap a picture of every poster that caught my interest.
Overall, the conference was really interesting, and it was nice to see the interest in my work. What I found surprising was the small presence of industry booths; there were only three in total. Coming from NeurIPS in New Orleans, this was quite a contrast, but perhaps it is related to the greater distance from the US, where most of the industry presence came from last time.