Welcome to this meetup -- Examples of AI research at Chalmers University of Technology!
This time, Juliano Pinto, Adam Andersson and Oscar Carlsson from Chalmers University of Technology will share some of their work and discuss how they use model-based deep learning and geometric deep learning to improve generalization while naturally increasing explainability. Please see the abstracts below for more detailed descriptions of their talks.
This event will be held online, with the possibility for the audience to ask questions. You can access the Zoom webinar through this link:
https://us02web.zoom.us/j/87296451559?pwd=VmV0elpIcGpETlBlektXWW5nc2xVQT09
Hope to see you there!
SCHEDULE:
17.30 Welcome
17.35 Talk+Q&A: Multi-object tracking with Transformers by Juliano Pinto
18.05 Discussion activity by focus group in breakout rooms
18.30 Break
18.35 Talk+Q&A: Elements of model-based deep learning by Adam Andersson
19.05 Break
19.20 Talk+Q&A: Introduction to Geometric Deep Learning by Oscar Carlsson
19.50 End of the event -- Thank you for joining!
ABSTRACTS:
Multi-object tracking with Transformers by Juliano Pinto
This talk will describe the research I have been working on: using Transformer-based neural networks to perform the task of multi-object tracking (MOT). I'll describe the MOT task in detail, briefly cover traditional approaches to solving it, explain what Transformer models are, and present the interesting results obtained so far in applying Transformer models to MOT, along with potential future research directions. The talk will be based mostly on the results shown in https://arxiv.org/abs/2104.00734 , but I also plan to describe new work in progress.
Elements of model-based deep learning by Adam Andersson
To obtain computationally feasible algorithms, the most realistic models are seldom applicable; simplified models are often used instead. These simplified models have, by definition, a worse fit than more accurate ones. On the other side of the spectrum are the completely data-driven models, such as deep neural networks. They are well known to perform very well when large amounts of data are available, but they are parameterised by a huge number of parameters with a low degree of interpretability. In the middle lies model-based deep learning: it starts from classical models and uses deep learning in one way or another to improve performance, often in terms of computational time but sometimes also in accuracy. In this talk I give an incomplete overview of model-based deep learning with some applications. Two papers written from a signal-processing perspective (https://arxiv.org/pdf/1912.10557.pdf , https://arxiv.org/pdf/2012.08405.pdf ) will form the basis of parts of the talk.
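To make the idea concrete, here is a minimal sketch of one common model-based pattern, "deep unfolding": a classical iterative solver is unrolled into a fixed number of layers, each with its own parameter. The step sizes here are hand-set for illustration; in an actual model-based network they (and possibly other quantities) would be learned from data, and the problem, dimensions, and number of layers below are all illustrative assumptions, not taken from the talks.

```python
import numpy as np

# Sketch: gradient descent for the least-squares problem min_x ||Ax - y||^2,
# unrolled into a fixed number of "layers". Each layer applies one gradient
# step with its own step size mu -- the quantity that a model-based network
# would learn from data instead of hand-setting.

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))   # toy measurement model
x_true = rng.standard_normal(10)
y = A @ x_true                      # noiseless toy measurements

def unrolled_solver(y, A, step_sizes):
    """One 'layer' per unrolled gradient-descent iteration."""
    x = np.zeros(A.shape[1])
    for mu in step_sizes:            # mu plays the role of a learnable weight
        grad = A.T @ (A @ x - y)     # gradient of 0.5 * ||Ax - y||^2
        x = x - mu * grad
    return x

# A classically safe step size: below 1/L, L = largest eigenvalue of A^T A.
L = np.linalg.eigvalsh(A.T @ A).max()
steps = [0.9 / L] * 50               # 50 unrolled "layers"

x_hat = unrolled_solver(y, A, steps)
print(np.linalg.norm(A @ x_hat - y))  # residual shrinks toward zero
```

The appeal of this structure is exactly the trade-off described above: the network inherits the interpretability of the classical iteration (every layer is a gradient step on a known model), while the learned parameters can buy speed or accuracy beyond the hand-tuned solver.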
Introduction to Geometric Deep Learning, applications and outlook, by Oscar Carlsson
Convolutional neural networks (CNNs) tend to perform well when there is a “spatial/geometric structure” in the data; the best-known example is images, where nearby pixels are generally dependent on each other. CNNs “respect” this “geometric prior” whereas fully connected layers do not, and this is a major reason for CNNs' overall good performance on image-related tasks. Geometric deep learning (GDL) is a branch of deep learning that tries to incorporate these concepts of “geometric priors” and symmetries into deep learning for different types of data -- both Euclidean and non-Euclidean -- and to highlight the diverse set of problems this approach can help with. In this talk I will try to give an introduction to GDL: its different formulations, domains, and problems, as well as applications and outlooks. The talk will be primarily based on the following preprints: https://arxiv.org/abs/2104.13478 , https://arxiv.org/abs/1611.08097 , as well as an upcoming paper.
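The "geometric prior" respected by CNNs can be checked numerically: convolution commutes with translation (it is translation-equivariant). The toy 1-D example below uses circular convolution so the shift is exact; the signal and filter values are arbitrary and chosen only for illustration, not drawn from the talk.

```python
import numpy as np

# Translation equivariance of convolution: convolving and then shifting
# gives the same result as shifting and then convolving. This is the
# symmetry that CNN layers build in and fully connected layers do not.

def circular_conv(signal, kernel):
    """Circular 1-D convolution written as a sum of shifted copies."""
    out = np.zeros(len(signal))
    for k, w in enumerate(kernel):
        out += w * np.roll(signal, k)
    return out

x = np.arange(8, dtype=float)        # toy "signal"
kernel = np.array([1.0, -2.0, 0.5])  # arbitrary filter weights
shift = 3

a = np.roll(circular_conv(x, kernel), shift)  # convolve, then translate
b = circular_conv(np.roll(x, shift), kernel)  # translate, then convolve
print(np.allclose(a, b))  # True
```

GDL generalizes exactly this observation: replace translations with other symmetry groups (rotations, permutations of graph nodes, ...) and design layers that commute with them.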