Smart Representations for Big Data Analytics (Summer Semester 2018)
(Knowledge Discovery and Data Mining)
- Weekly Hours: 4
- Credits: 6
- Enrolment Deadline: 20.04.2018
- Teaching Form: Seminar
- Enrolment Type: Compulsory Elective Module
- Maximum number of participants: 9
Programs & Modules
- BPET-Konzepte und Methoden
- BPET-Techniken und Werkzeuge
- OSIS-Konzepte und Methoden
- OSIS-Techniken und Werkzeuge
Smart representations (such as embeddings, graphical models, discretizations) are useful models that allow the abstraction of data within a well-defined mathematical formalism. The representations we aim at are conceptual abstractions of real-world phenomena (such as sensor readings, causal dependencies, social interactions) into the world of statistics and discrete mathematics, in such a way that the powerful tools developed in those areas become available for complex analyses in a simple and elegant manner.
Usually, data is transformed explicitly or implicitly from a raw data representation (as it was measured or collected) into a smart data representation (more useful for data analysis). One goal of such smart representations, e.g. a higher level of abstraction, is to enable the application of data mining techniques and theory developed in different areas. In many cases, smart data representations also reduce the original data mining problem to a more tractable or more compact problem formulation that can be solved by an algorithm (e.g. with lower worst-case complexity, scalable to larger data sizes, more robust to data artifacts, etc.).
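As a minimal illustration of this raw-to-smart transformation (not taken from the course material; bin count and signal are arbitrary choices for this sketch), the following snippet discretizes a continuous sensor signal into a short symbolic alphabet, so that tools for categorical sequences (pattern mining, Markov models, etc.) become applicable:

```python
import numpy as np

# Toy raw data: a noisy periodic sensor signal.
rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 4 * np.pi, 100)) + 0.1 * rng.standard_normal(100)

# Smart representation: equal-width binning into a 4-symbol alphabet.
n_symbols = 4
edges = np.linspace(signal.min(), signal.max(), n_symbols + 1)
# np.digitize assigns each reading a bin index; clip handles the boundary values.
symbols = np.clip(np.digitize(signal, edges), 1, n_symbols) - 1

# The discrete sequence abstracts away exact magnitudes while keeping the shape.
print(symbols[:10])
```

Equal-width binning is only the simplest option; the same idea underlies richer symbolic representations used in time series mining.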
In this seminar we will focus on four smart data representations, with the aim of understanding their analytical properties across different data mining tasks:
- representations and similarities of graphs for classification
- representations of natural time series baselines for outlier interpretation
- representations of dependencies in event series and time series
- representations of incomplete data via imputation
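For the first topic, one of the simplest vector representations of a graph is a normalized degree histogram; it is a hedged sketch only (the reading list's graph kernels and embeddings are far richer), but it shows how a graph becomes a fixed-length vector on which standard classifiers operate:

```python
import numpy as np

def degree_histogram(adjacency, max_degree=5):
    """Represent a graph as a fixed-length, size-normalized degree histogram."""
    degrees = adjacency.sum(axis=1).astype(int)
    # Degrees above max_degree share the last bin so the vector length is fixed.
    hist = np.bincount(np.clip(degrees, 0, max_degree), minlength=max_degree + 1)
    return hist / hist.sum()  # normalize so graphs of different sizes compare

# Triangle graph: every node has degree 2.
triangle = np.array([[0, 1, 1],
                     [1, 0, 1],
                     [1, 1, 0]])
print(degree_histogram(triangle))
```

Two graphs can now be compared with any vector similarity, which is the basic move behind graph kernels for classification.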
The main focus in each of these four areas will be understanding and comparing smart representations and their explicit/implicit data transformation methods. By transforming the data, we will study the limitations and advantages of each technique, and how the data representation changes the problem setup, reduces complexity, introduces robustness, or provides other properties valuable for big data analytics.
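As one concrete example of such a problem reduction, imputation turns an incomplete-data problem into a complete-data one. The sketch below uses per-column mean imputation, the simplest baseline against which learned methods such as the denoising-autoencoder approach (MIDA) from the reading list are compared; the data matrix is made up for illustration:

```python
import numpy as np

# Toy incomplete data: NaN marks a missing measurement.
X = np.array([[1.0, 2.0],
              [np.nan, 4.0],
              [3.0, np.nan]])

col_means = np.nanmean(X, axis=0)            # column means, ignoring NaNs
X_imputed = np.where(np.isnan(X), col_means, X)

# Any method that requires complete data can now be applied to X_imputed.
print(X_imputed)
```

Mean imputation preserves column means but distorts variances and correlations, which is exactly the kind of limitation the seminar will examine for each representation.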
- Verma, Saurabh, and Zhi-Li Zhang: "Hunt for the Unique, Stable, Sparse and Fast Feature Learning on Graphs." NIPS, 2017.
- Tsitsulin, Anton, Davide Mottin, Panagiotis Karras, and Emmanuel Müller: "VERSE: Versatile Graph Embeddings from Similarity Measures." WWW, 2018.
- Yanardag, Pinar, and S. V. N. Vishwanathan: "Deep Graph Kernels." KDD, 2015.
- Sundararajan et al.: "Axiomatic Attribution for Deep Networks." ICML, 2017.
- Ribeiro et al.: "'Why Should I Trust You?': Explaining the Predictions of Any Classifier." KDD, 2016.
- Du, N., H. Dai, et al.: "Recurrent Marked Temporal Point Processes: Embedding Event History to Vector." KDD, 2016.
- Agrawal, S., G. Atluri, et al.: "Tripoles: A New Class of Relationships in Time Series Data." KDD, 2017.
- Gondara, Lovedeep, and Ke Wang: "MIDA: Multiple Imputation Using Denoising Autoencoders." PAKDD, 2018.
- van Buuren, Stef: "Flexible Imputation of Missing Data." Chapman and Hall/CRC, 2012.
Further information will be provided here.
Presentation and Paper
Kickoff Meeting: Monday, 23 April, 9:15, in room D-E.9/10