Stimuli perceived by machines, as well as existing content in the form of video, text, or audio, are often unstructured from a machine's perspective, since the contained information is not readily available for processing. Machine learning, and deep learning in particular, enables us to extract the salient information from unstructured data and generate a descriptive, structured representation that can be used in further applications, especially for building models that explain or predict samples, e.g. in recommendation systems.
Off-the-shelf algorithms, however, are not suitable for every problem encountered in practice, in particular when not enough training data is available, e.g. in the aerospace context. I'm therefore looking into how to adapt existing algorithms to particular problems in the media domain, but also into how to leverage non-supervised approaches, e.g. deep reinforcement learning, in order to mitigate the lack of training data.