Towards Mining Actionable Cyber Threat Intelligence from Process Behavior Graphs
Master's student: Wenzel Pünter
Threat actors and defenders in cybersecurity are locked in an arms race: while Cyber Threat Intelligence exchange formats have relied on static properties of malware in the past, new families use novel techniques like Living-off-the-Land binaries that render traditional measures ineffective, as benign executables are abused for malicious purposes.
This work tackles this issue by proposing a graph-based behavior model with OSINT-enriched relationships, derived from hardware- and OS-defined security boundaries. In addition, it suggests an algorithm for transforming log traces into Indicator-of-Compromise-like subgraphs using the proposed graph schema, a statistical baseline, and a tainting hint. Furthermore, the work presents a case study of this approach, applying a practical implementation of the proposed data pipeline to real-world malware samples.
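The transformation step described above could be sketched roughly as follows. This is a minimal, hypothetical simplification: the entity names, the frequency threshold, and the taint-propagation rule are illustrative assumptions, not the thesis's actual design.

```python
from collections import Counter


def extract_ioc_subgraph(trace, baseline_counts, baseline_total, taint_seed, threshold=0.01):
    """Keep only edges whose relative frequency in a benign baseline falls
    below `threshold`, then expand outward from a tainted seed node to
    select the connected component containing it."""
    # Statistical-baseline filter: drop behavior edges that are common in benign traces.
    rare = [(src, action, dst) for (src, action, dst) in trace
            if baseline_counts.get((src, action, dst), 0) / baseline_total < threshold]

    # Build an undirected adjacency map over the remaining edges.
    adj = {}
    for src, _action, dst in rare:
        adj.setdefault(src, set()).add(dst)
        adj.setdefault(dst, set()).add(src)

    # Taint propagation as a simple graph traversal from the tainting hint.
    seen, frontier = {taint_seed}, [taint_seed]
    while frontier:
        node = frontier.pop()
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)

    # The IoC-like subgraph: rare edges reachable from the tainted seed.
    return [e for e in rare if e[0] in seen and e[2] in seen]


# Toy example: a common parent-child process edge is filtered out by the
# baseline, while the rare certutil download-and-drop chain is retained.
baseline = Counter({("explorer.exe", "exec", "cmd.exe"): 990})
trace = [
    ("explorer.exe", "exec", "cmd.exe"),
    ("cmd.exe", "exec", "certutil.exe"),
    ("certutil.exe", "write", "payload.dll"),
]
subgraph = extract_ioc_subgraph(trace, baseline, 1000, "certutil.exe")
```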
The schema outperformed comparable formats on qualitative metrics, especially in the coverage of entities and actions. The evaluation of the proposed algorithm on the dataset yielded mixed results: filtering with a statistical baseline on top of the graph schema successfully reduced the behavior graph to the relevant nodes and edges; however, the resulting subgraphs were not sufficient to reliably re-identify the same malware family.
Generating Art with Multi-Conditional StyleGANs
Master's student: Konstantin Dobler
Creating meaningful art is often viewed as a uniquely human endeavor. In this project, we introduce a multi-conditional Generative Adversarial Network (GAN) approach trained on large amounts of human paintings to synthesize realistic-looking paintings that emulate human art. Our approach is based on the StyleGAN neural network architecture, but incorporates a custom multi-conditional control mechanism that provides fine-granular control over characteristics of the generated paintings, e.g., with regard to the perceived emotion evoked in a spectator. With this network, we are also able to emulate styles of great artists like Monet or van Gogh. To improve the quality of generated paintings, we introduce the conditional truncation trick, which adapts the standard truncation trick for the conditional setting and diverse datasets.
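The standard truncation trick pulls a sampled style vector toward the global mean style vector to trade diversity for quality. The conditional variant described above can be sketched as interpolating toward a per-condition mean instead, so that truncation does not collapse distinct conditions onto one average style. The function name and the plain-list representation below are illustrative assumptions, not the project's actual implementation.

```python
def conditional_truncation(w, w_bar_c, psi=0.7):
    """Interpolate a style vector `w` toward the mean style vector
    `w_bar_c` of its condition (e.g., an art style or emotion label):
    w' = w_bar_c + psi * (w - w_bar_c).

    psi = 1.0 leaves the sample unchanged; psi = 0.0 returns the
    condition mean itself."""
    return [m + psi * (x - m) for x, m in zip(w, w_bar_c)]


# Toy example in 3 dimensions: truncating toward a hypothetical
# per-condition mean rather than the origin/global mean.
w_sample = [1.0, -2.0, 0.5]
w_mean_impressionism = [0.2, 0.2, 0.2]
w_truncated = conditional_truncation(w_sample, w_mean_impressionism, psi=0.5)
```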
Robust Multi-Agent Reinforcement Learning for Scalable Failure Root-Cause Analysis
Master's students: Christopher Aust, Ulrike Bath, Florence Böttger, Jonas Krah
[Context] Diffuse events like pandemics, wars, or natural disasters inflict local changes that can quickly spread globally via behavioral contagion, e.g., the hoarding of toilet paper (Covid-19) and, more recently, sunflower oil (invasion of Ukraine). These events affect how people use distributed systems in domains ranging from medical care and personal finance to travel and e-commerce platforms. As these systems rely increasingly on machine learning prediction models, changes in their usage correspond to unmitigated distribution shifts, also called out-of-distribution (OOD) data, that can quickly degrade the accuracy of these built-in models, wreaking havoc on the corresponding systems and their users.
[Problem] We investigated this OOD phenomenon and its mitigation challenges in the realm of machine-learning-based self-adaptive systems. In particular, we focused on how to endow these systems with robustness capabilities under partially observable system states, for instance, how to detect failures whose root causes are hidden by uncertain patterns of failure propagation.
[Approach] Our approach combined two strategies: (1) vertical separation of concerns among monitoring, analysis, planning, and execution of failure-fix actions (via a two-layered multi-agent reinforcement learning architecture) and (2) horizontal divide-and-conquer of the large state space (via clustering the shops of a multi-tenant e-commerce platform among autonomous agents). In this way, we aimed to isolate the effects of distribution shifts while giving the system enough time to counteract further losses of prediction accuracy. To that end, we relied on transfer learning among agents and on distributing the training and execution of shops across agent clusters.
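The horizontal divide-and-conquer step above could be sketched as a partition of shops among agents. The round-robin assignment below is a deliberately simple stand-in; the actual clustering criterion used in the work (e.g., by shop similarity or load) is not specified here.

```python
def assign_shops_to_agents(shops, n_agents):
    """Partition a list of shops among `n_agents` autonomous agents,
    round-robin, so each agent trains and acts on its own cluster
    (illustrative placeholder for the work's clustering strategy)."""
    clusters = [[] for _ in range(n_agents)]
    for i, shop in enumerate(shops):
        clusters[i % n_agents].append(shop)
    return clusters


# Toy example: five shops split across two agents.
clusters = assign_shops_to_agents(["shop-1", "shop-2", "shop-3", "shop-4", "shop-5"], 2)
```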
[Results] We evaluated the approach via a series of controlled experiments on the effect of perturbations across levels of failure-pattern complexity and the ratio of shops per agent. The results showed that we could achieve certain robustness guarantees with respect to (1) the convergence towards a minimal number of fixes per failure pattern (successful repair actions), (2) the convergence of the average regret (1 minus the predicted probability of the correct action), and (3) the stability of the corresponding policies (constrained outliers, i.e., the number of incorrect actions after the point of convergence).
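The regret metric from (2) above can be written out directly. This is a straightforward transcription of the definition given in the abstract; the function name and input format are assumptions for illustration.

```python
def average_regret(correct_action_probs):
    """Average regret over a sequence of decisions, where the per-decision
    regret is 1 minus the policy's predicted probability of the correct
    (fix) action, as defined in the results above."""
    regrets = [1.0 - p for p in correct_action_probs]
    return sum(regrets) / len(regrets)


# Toy example: a policy that assigns probability 0.5 and then 1.0 to the
# correct repair action incurs regrets of 0.5 and 0.0, averaging 0.25.
regret = average_regret([0.5, 1.0])
```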