1.
Torkura, K.A., Cheng, F., Meinel, C.: A Proposed Framework For Proactive Vulnerability Assessments in Cloud Deployments. Proceedings of the 10th International Conference for Internet Technology and Secured Transactions (ICITST2015). IEEE (2015).
Vulnerability scanners are deployed in computer networks and software to identify security flaws and misconfigurations in a timely manner. However, cloud computing has introduced new attack vectors that require a commensurate change of vulnerability assessment strategies. To investigate the effectiveness of these scanners in cloud environments, we first conduct a quantitative security assessment of OpenStack's vulnerability lifecycle and discover severe risk levels resulting from prolonged patch release durations. More specifically, there are long time lags between OpenStack patch releases and patch inclusion in vulnerability scanning engines. This scenario leaves sufficient time for malicious actions and for the creation of exploits such as zero-days. Mitigating these concerns requires systems with current knowledge of events within the vulnerability lifecycle. However, current vulnerability scanners are designed to depend on information about publicly announced vulnerabilities, which mostly includes only vulnerability disclosure dates. Accordingly, we propose a framework that mitigates these risks by gathering and correlating information from several security information sources, including exploit databases, malware signature repositories, and Bug Tracking Systems. This information is thereafter used to automatically generate plugins armed with current information about zero-day exploits and unknown vulnerabilities. We have characterized two new security metrics to describe the discovered risks.
2.
Cheng, F., Sapegin, A., Gawron, M., Meinel, C.: Analyzing Boundary Device Logs on the In-Memory Platform. Proceedings of the IEEE International Symposium on Big Data Security on Cloud (BigDataSecurity‘15). IEEE (2015).
Boundary devices, such as routers, firewalls, proxies, and domain controllers, continuously generate logs showing the behavior of internal and external users as well as the working state of the network and the devices themselves. Analyzing these logs rapidly and efficiently is valuable for both security and reliability. However, it is a challenging task because a huge amount of data may be generated and must be analyzed within a very short time. In this paper, we address this challenge by applying complex analytics and modern in-memory database technology to large amounts of log data. Logs from different kinds of devices are collected, normalized, and stored in an in-memory database. Machine learning approaches are then applied to the centralized big data to identify attacks and anomalies that are not easy to detect from individual log events. The proposed method is implemented on an in-memory platform, the SAP HANA Platform, and the experimental results show that it provides the expected capabilities as well as high performance.
3.
Torkura, K.A., Cheng, F., Meinel, C.: Application of Quantitative Security Metrics In Cloud Computing. Proceedings of the 10th International Conference for Internet Technology and Secured Transactions (ICITST2015). IEEE (2015).
Security issues are still prevalent in cloud computing, particularly in public clouds. Efforts by Cloud Service Providers to secure outsourced resources are not sufficient to gain trust from customers. Service Level Agreements (SLAs) are currently used to guarantee security and privacy; however, research into SLA monitoring suggests levels of dissatisfaction among cloud users. Accordingly, enterprises favor private clouds such as OpenStack, as they offer more control and security visibility. However, private clouds do not provide absolute security; they share some security challenges with public clouds while eliminating others. Approaches based on security metrics, such as quantitative security assessments, can be adopted to quantify the security value of private and public clouds. Quantitative software security assessments provide extensive visibility into security postures and help assess whether security has improved or deteriorated. In this paper, we focus on private cloud security using OpenStack as a case study and conduct a quantitative assessment of OpenStack based on empirical data. Our analysis is multi-faceted, covering OpenStack's major releases and services. We employ security metrics to determine vulnerability density, vulnerability severity, and patching behavior. We show that OpenStack's security has improved since inception; however, concerted efforts remain imperative for secure deployments, particularly in production environments.
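A hedged illustration of one such metric (the exact definitions are the paper's; the figures below are invented): vulnerability density is commonly computed as reported vulnerabilities per thousand lines of code (KLOC), which lets releases of different sizes be compared.

    # Illustrative Python sketch; release names and numbers are
    # hypothetical, not data from the paper.
    def vulnerability_density(num_vulns: int, loc: int) -> float:
        """Reported vulnerabilities per thousand lines of code."""
        return num_vulns / (loc / 1000.0)

    releases = {
        "release-A": {"vulns": 12, "loc": 400_000},
        "release-B": {"vulns": 9, "loc": 550_000},
    }
    for name, r in releases.items():
        print(f"{name}: {vulnerability_density(r['vulns'], r['loc']):.3f} vulns/KLOC")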
4.
Gawron, M., Cheng, F., Meinel, C.: Automatic Detection of Vulnerabilities for Advanced Security Analytics. Proceedings of the 17th Asia-Pacific Network Operations and Management Symposium (APNOMS’15). pp. 471–474. IEEE (2015).
The detection of vulnerabilities in computer systems and computer networks, as well as weakness analysis, are crucial problems. The presented method tackles the problem with automated detection. To identify vulnerabilities, the approach uses a logical representation of the preconditions and postconditions of vulnerabilities. This conditional structure models the requirements and impacts of each vulnerability, so an automated analytical function can detect security leaks on a target system based on this logical format. With this method it is possible to scan a system without much expertise, since automated or computer-aided vulnerability detection does not require special knowledge about the target system. The gathered information is used to provide security advisories and enhanced diagnostics that can also detect attacks exploiting multiple vulnerabilities of the system.
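A minimal sketch of the precondition/postcondition idea (condition names and structure are illustrative, not the paper's format): a vulnerability is flagged on a system whose state satisfies all of its preconditions, and its postconditions are added to the state, which may in turn enable further vulnerabilities.

    # Python sketch: fixed-point iteration over pre-/postconditions.
    vulns = [
        {"id": "CVE-A", "pre": {"service:ssh", "weak-password"}, "post": {"user-shell"}},
        {"id": "CVE-B", "pre": {"user-shell", "kernel<4.0"}, "post": {"root-shell"}},
    ]

    def detect(initial_state: set) -> list:
        state, found, changed = set(initial_state), [], True
        while changed:
            changed = False
            for v in vulns:
                if v["id"] not in found and v["pre"] <= state:
                    found.append(v["id"])   # preconditions satisfied
                    state |= v["post"]      # impacts may enable next steps
                    changed = True
        return found

    print(detect({"service:ssh", "weak-password", "kernel<4.0"}))
    # -> ['CVE-A', 'CVE-B']: a two-step chain found automatically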
5.
Krentz, K.-F., Meinel, C.: Handling Reboots and Mobility in 802.15.4 Security. Proceedings of the 31st Annual Computer Security Applications Conference (ACSAC 2015). ACM, Los Angeles, CA, USA (2015).
6.
Ussath, M., Cheng, F., Meinel, C.: Concept for a Security Investigation Framework. Proceedings of the 7th IFIP International Conference on New Technologies, Mobility, and Security (NTMS’15) (2015).
7.
Torkura, K.A., Cheng, F., Meinel, C.: Aggregating Vulnerability Information for Proactive Cloud Vulnerability Assessment. Journal of Internet Technology and Secured Transactions 4 (2015).
The current increase in software vulnerabilities necessitates concerted research into vulnerability lifecycles and how effective mitigation approaches can be implemented. This is especially imperative in cloud infrastructures, considering the novel attack vectors introduced by this emerging computing paradigm. By conducting a quantitative security assessment of OpenStack's vulnerability lifecycle, we discovered severe risk levels resulting from a prolonged gap between vulnerability discovery and patch release. We also observed an additional time lag between patch release and patch inclusion in vulnerability scanning engines. This scenario leaves sufficient time for malicious actors to develop zero-day exploits and other types of malicious software. Mitigating these concerns requires systems with current knowledge of events within the vulnerability lifecycle. However, current threat mitigation systems such as vulnerability scanners are designed to depend on information from public vulnerability repositories, which mostly do not retain comprehensive information on vulnerabilities. Accordingly, we propose a framework that mitigates the aforementioned risks by gathering and correlating information from several security information sources, including exploit databases, malware signature repositories, Bug Tracking Systems, and other channels. This information is thereafter used to automatically generate plugins armed with current information about possible zero-day exploits and other unknown vulnerabilities. We have characterized two new security metrics to describe the discovered risks: Scanner Patch Time and Scanner Patch Discovery Time.
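One plausible reading of the first metric (the authoritative definitions are in the paper; the dates below are invented) is the lag between a vendor patch release and the corresponding detection plugin appearing in a scanner's feed:

    # Python sketch of a Scanner-Patch-Time-style computation.
    from datetime import date

    def scanner_patch_time(patch_release: date, plugin_release: date) -> int:
        """Days between patch release and scanner plugin availability."""
        return (plugin_release - patch_release).days

    print(scanner_patch_time(date(2015, 3, 2), date(2015, 4, 20)))  # 49 days of exposure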
8.
Gawron, M., Cheng, F., Meinel, C.: Automatic Vulnerability Detection for Weakness Visualization and Advisory Creation. Proceedings of the 8th International Conference on Security of Information and Networks (SIN’15). pp. 229–236. ACM Press (2015).
The detection of vulnerabilities in computer systems and computer networks, as well as the representation of the results, are crucial problems. The presented method tackles the problem with automated detection and an intuitive representation. To detect vulnerabilities, the approach uses a logical representation of the preconditions and postconditions of vulnerabilities, so an automated analytical function can detect security leaks on a target system. The gathered information is used to provide security advisories and enhanced diagnostics for the system. Additionally, the conditional structure allows us to create attack graphs that visualize the network structure together with the integrated vulnerability information. Finally, we propose methods to resolve the identified weaknesses, whether by removing or updating vulnerable applications, and thereby secure the target system. These advisories are created automatically and provide possible solutions for the security risks.
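The same pre-/postcondition structure naturally yields attack graphs; a hedged sketch (entries invented) draws an edge from one vulnerability to another whenever the first one's impact satisfies a requirement of the second:

    # Python sketch: attack-graph edges from pre-/postconditions.
    vulns = {
        "CVE-A": {"pre": {"net-access"}, "post": {"user-shell"}},
        "CVE-B": {"pre": {"user-shell"}, "post": {"root-shell"}},
        "CVE-C": {"pre": {"user-shell"}, "post": {"db-read"}},
    }
    edges = [
        (u, v)
        for u, a in vulns.items()
        for v, b in vulns.items()
        if u != v and a["post"] & b["pre"]  # u's impact enables v
    ]
    print(edges)  # [('CVE-A', 'CVE-B'), ('CVE-A', 'CVE-C')]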
9.
Amirkhanyan, A., Sapegin, A., Cheng, F., Meinel, C.: Simulation User Behavior on A Security Testbed Using User Behavior States Graph. Proceedings of the 8th International Conference on Security of Information and Networks (SIN’15). pp. 217–223. ACM Press (2015).
For testing new methods of network security or new algorithms of security analytics, we need experimental environments as well as test data that are as similar as possible to real-world data. Researchers are therefore constantly trying to find the best approaches and recommendations for creating and simulating testbeds, because automating testbed creation is a crucial goal for accelerating research progress. One way to generate data is to simulate user behavior on virtual machines, but the challenge is how to describe what we want to simulate. In this paper, we present a new approach for describing user behavior for a simulation tool. The approach meets the requirements of simplicity and extensibility, and it can be used to generate user behavior scenarios and simulate them on Windows-family virtual machines. The proposed approach is applied in our simulation tool, which addresses the lack of data for research in network security and security analytics by generating log datasets that can be used for testing new methods of network security and new algorithms of security analytics.
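One conceivable shape for such a description (the paper's concrete format is not reproduced here; states and weights are invented) is a weighted transition table over behavior states that a simulator can walk to produce action sequences:

    # Python sketch of a user-behavior states graph.
    import random

    behaviour = {
        "logon":      [("browse_web", 0.6), ("open_mail", 0.4)],
        "browse_web": [("open_mail", 0.5), ("logoff", 0.5)],
        "open_mail":  [("browse_web", 0.3), ("logoff", 0.7)],
        "logoff":     [],
    }

    def simulate(start: str = "logon") -> list:
        trace, state = [start], start
        while behaviour[state]:
            nxt, weights = zip(*behaviour[state])
            state = random.choices(nxt, weights=weights)[0]
            trace.append(state)
        return trace

    print(simulate())  # e.g. ['logon', 'browse_web', 'logoff']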
10.
Azodi, A., Gawron, M., Sapegin, A., Cheng, F., Meinel, C.: Leveraging Event Structure for Adaptive Machine Learning on Big Data Landscapes. Proceedings of the International Conference on Mobile, Secure and Programmable Networking (MSPN’15). Springer (2015).
Modern machine learning techniques have been applied to many aspects of network analytics in order to discover patterns that can clarify or better demonstrate the behavior of users and systems within a given network. Often the information to be processed has to be converted to a different type so that machine learning algorithms can process it. To accurately process the information generated by systems within a network, the true intention and meaning behind the information must be taken into account. In this paper we propose different approaches for mapping network information, such as IP addresses, to integer values that attempt to keep the relations present in the original format of the information intact. With one exception, all of the proposed mappings result in outputs of at most 64 bits in order to allow atomic operations on CPUs with 64-bit registers. The mapping output size is restricted in the interest of performance. Additionally, we demonstrate the benefits of the new mappings for one specific machine learning algorithm (k-means) and compare the algorithm's results for datasets with and without the proposed transformations.
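The generic, order-preserving variant of such a mapping (the paper's concrete mappings are not reproduced here) treats an IPv4 address as a 32-bit integer, so numerically close values correspond to addresses in the same range, which distance-based algorithms such as k-means can exploit:

    # Python sketch: relation-preserving IPv4-to-integer mapping.
    import ipaddress

    def ip_to_int(ip: str) -> int:
        """Order-preserving IPv4 -> int; fits comfortably in 64 bits."""
        return int(ipaddress.IPv4Address(ip))

    print([ip_to_int(ip) for ip in ["10.0.0.1", "10.0.0.7", "192.168.1.5"]])
    # The 10.0.0.x values land close together and far from
    # 192.168.1.5, so k-means can group hosts by subnet proximity.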
11.
Sapegin, A., Gawron, M., Jaeger, D., Cheng, F., Meinel, C.: High-speed Security Analytics Powered by In-memory Machine Learning Engine. Proceedings of the 14th International Symposium on Parallel and Distributed Computing (ISPDC 2015). pp. 74–81. IEEE (2015).
Modern Security Information and Event Management systems should be capable of storing and processing large numbers of events or log messages in different formats and from different sources. This requirement often prevents such systems from using computationally heavy algorithms for security analysis. To deal with this issue, we built our system on an in-memory database with an integrated machine learning library, namely SAP HANA. Three approaches, i.e., (1) deep normalisation of log messages, (2) storing data in main memory, and (3) running data analysis directly in the database, allow us to increase processing speed to the point that machine learning analysis of security events becomes possible nearly in real time. To prove our concepts, we measured the processing speed of the developed system on data generated using an Active Directory testbed and showed the efficiency of our approach for high-speed analysis of security events.
12.
Sapegin, A., Amirkhanyan, A., Gawron, M., Cheng, F., Meinel, C.: Poisson-based Anomaly Detection for Identifying Malicious User Behaviour. Proceedings of the International Conference on Mobile, Secure and Programmable Networking (MSPN’15). Springer (2015).
Nowadays, malicious user behaviour that triggers no access violation or data leak alert is difficult to detect. Using stolen login credentials, an intruder conducting espionage will first try to stay undetected: silently collecting data from the company network and using only resources he is authorised to access. To deal with such cases, a Poisson-based anomaly detection algorithm is proposed in this paper. Two extra measures make it possible to achieve high detection rates and at the same time reduce the number of false-positive alerts: (1) checking the probability first for the group and then for single users, and (2) selecting the threshold automatically. To validate the proposed approach, we developed a special simulation testbed that emulates user behaviour in a virtual network environment. The proof-of-concept implementation has been integrated into our prototype of a SIEM system, the Real-time Event Analysis and Monitoring System, where the emulated Active Directory logs from a Microsoft Windows domain are extracted and normalised into the Object Log Format for further processing and anomaly detection. The experimental results show that our algorithm was able to detect all events related to malicious activity and produced zero false-positive results. Conceived as a module for our self-developed SIEM system based on the SAP HANA in-memory database, our solution is capable of processing high volumes of data and shows high efficiency on the experimental dataset.
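A minimal sketch of the scoring idea (rates and the threshold below are invented; the group-then-user ordering follows the abstract): given a historical mean event rate, flag counts whose Poisson tail probability is implausibly small.

    # Python sketch of Poisson-based anomaly scoring.
    from math import exp

    def poisson_tail(k: int, mu: float) -> float:
        """P(X >= k) for X ~ Poisson(mu), computed iteratively."""
        p, cdf = exp(-mu), 0.0
        for i in range(k):
            cdf += p
            p *= mu / (i + 1)
        return max(0.0, 1.0 - cdf)

    def is_anomalous(count: int, mu: float, threshold: float = 1e-4) -> bool:
        return poisson_tail(count, mu) < threshold

    # Check the group first; drill down to single users only if the
    # group-level count already looks improbable.
    group_mu, user_mu = 120.0, 4.0
    if is_anomalous(210, group_mu):
        for user, count in {"alice": 5, "bob": 48}.items():
            if is_anomalous(count, user_mu):
                print(f"alert: {user} with {count} events")  # -> bob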
13.
Torkura, K.A., Meinel, C.: Towards Cloud-Aware Vulnerability Assessments. Proceedings of the 11th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS2015). IEEE (2015).
Vulnerability assessments are a best practice for computer security and a requirement for regulatory compliance. Potential and existing security holes can be identified during vulnerability assessments, and security breaches can thereby be averted. However, the unique nature of cloud computing environments requires more dynamic assessment techniques. The proliferation of cloud services and cloud-aware applications introduces more cloud vulnerabilities, but current measures for the identification, mitigation, and prevention of cloud vulnerabilities do not suffice. Our investigations attribute a possible reason for this inefficiency to lapses in the availability of precise cloud vulnerability information. We also observed that most research efforts in the context of cloud vulnerabilities concentrate on IaaS, leaving other cloud models largely unattended. Similarly, most cloud assessment efforts tackle general rather than cloud-specific vulnerabilities. Yet mitigating cloud-specific vulnerabilities is important for cloud security. Hence, this paper proposes a new approach that addresses these issues by monitoring, acquiring, and adapting publicly available cloud vulnerability information for effective vulnerability assessments. We correlate vulnerability information from public vulnerability databases and develop Network Vulnerability Tests for specific cloud vulnerabilities. We have implemented, evaluated, and verified the suitability of our approach.
14.
Jaeger, D., Azodi, A., Cheng, F., Meinel, C.: Normalizing Security Events with a Hierarchical Knowledge Base. Proceedings of the 9th International Conference on Information Security Theory and Practice (WISTP’15). pp. 237–248. Springer International Publishing (2015).
An important technique for attack detection in complex company networks is the analysis of log data from various network components. As networks grow, the number of produced log events increases dramatically, sometimes to multiple billions of events per day. The analysis of such big data highly relies on full normalization of the log data in real time. Until now, the important issue of fully normalizing large numbers of log events has been handled only insufficiently by many software solutions and is not well covered in existing research. In this paper, we propose and evaluate multiple approaches for handling the normalization of a large number of typical logs better and more efficiently. The main idea is to organize the normalization in multiple levels by using a hierarchical knowledge base (KB) of normalization rules. In the end, we achieve a performance gain of about 1000x with the presented approaches, in comparison to a naive approach typically used in existing normalization solutions. With this improvement, big log data can be handled much faster and used to find and mitigate attacks in real time.
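A hedged sketch of the hierarchy (rule contents and levels are invented): rules are grouped, e.g. by log source and event type, so that only a small subset of regular expressions is ever tried against a given event instead of the whole rule base.

    # Python sketch of a two-level normalization knowledge base.
    import re

    KB = {
        "sshd": {
            "auth": [re.compile(r"Failed password for (?P<user>\S+) from (?P<ip>\S+)")],
        },
        "nginx": {
            "access": [re.compile(r'(?P<ip>\S+) - \S+ .* "(?P<method>\w+) (?P<path>\S+)')],
        },
    }

    def normalize(source: str, kind: str, line: str):
        for rule in KB.get(source, {}).get(kind, []):  # small subset only
            m = rule.search(line)
            if m:
                return m.groupdict()
        return None

    print(normalize("sshd", "auth", "Failed password for root from 10.1.2.3 port 22"))
    # {'user': 'root', 'ip': '10.1.2.3'}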
15.
Amirkhanyan, A., Cheng, F., Meinel, C.: Real-Time Clustering of Massive Geodata for Online Maps to Improve Visual Analysis. Proceedings of the 11th International Conference on Innovations in Information Technology (IIT2015). IEEE, Dubai, UAE (2015).
Nowadays, a lot of data is produced by social media services, and increasingly often these data contain location information, which opens up a wide range of possibilities for analysis: we can be interested not only in the content but also in the location where it was produced. To analyze geospatial data well, we need to find the best approaches to geo-clustering, meaning real-time clustering of massive geodata with high accuracy. In this paper, we present a new approach to clustering geodata for online maps such as Google Maps, OpenStreetMap, and others. Clustering geodata by location improves their visual analysis and situational awareness. Our approach is a server-side online algorithm that does not need the entire dataset to start clustering. Moreover, it works in real time and can be used to cluster massive geodata for online maps in reasonable time. We implemented the proposed approach as a proof of concept, and we provide experiments and an evaluation of our approach.
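One way such an online algorithm can work (a sketch under invented parameters, not the paper's algorithm): map each incoming point to a grid cell whose size depends on the zoom level and update that cell's running centroid, so every point is absorbed in O(1) without re-reading the dataset.

    # Python sketch of grid-based online geo-clustering.
    from collections import defaultdict

    def cell(lat: float, lon: float, zoom: int):
        size = 360.0 / (2 ** zoom)            # degrees per cell (made-up choice)
        return (int(lat // size), int(lon // size))

    clusters = defaultdict(lambda: [0, 0.0, 0.0])  # count, sum_lat, sum_lon

    def add_point(lat: float, lon: float, zoom: int = 6) -> None:
        c = clusters[cell(lat, lon, zoom)]
        c[0] += 1; c[1] += lat; c[2] += lon    # O(1) update per point

    for lat, lon in [(52.52, 13.40), (52.50, 13.45), (48.85, 2.35)]:
        add_point(lat, lon)
    for key, (n, s_lat, s_lon) in clusters.items():
        print(f"cell {key}: {n} points, centroid ({s_lat/n:.2f}, {s_lon/n:.2f})")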
16.
Sianipar, J.H., Meinel, C.: A verification mechanism for cloud brokerage system. Proceedings of the Second International Conference on Computer Science, Computer Engineering, and Social Media (CSCESM2015). pp. 143–148. IEEE Press, Lodz, Poland (2015).
In existing cloud brokerage systems, the client does not have the ability to verify the result of the cloud service selection. A cloud broker may be biased in selecting the best Cloud Service Provider (CSP) for a client: a compromised or dishonest broker can unfairly select a CSP for its own advantage by cooperating with the selected CSP. To address this problem, we propose a mechanism to verify the CSP selection result of the cloud broker. In this verification mechanism, the properties of every CSP are also verified. It uses a trusted third party to gather the clustering results from the cloud broker; this trusted third party also serves as a base station that collects CSP properties in a multi-agent system. Software agents are installed and run on every CSP, which is monitored by the agents as the customer's representative inside the cloud. The agents report to a third party that must be trusted by the CSPs, the customers, and the cloud broker. The third party provides transparency by publishing reports to the authorized parties (CSPs and customers).
17.
Elsaid, M.E., Meinel, C.: Friendship based Storage Allocation for Online Social Networks Cloud Computing. Proceedings of the International Conference of Cloud Computing Technologies and Applications (CloudTech 2015). IEEE Press, Marrakesh, Morocco (2015).
18.
Azodi, A., Jaeger, D., Cheng, F., Meinel, C.: Passive Network Monitoring using REAMS. Proceedings of the 6th International Conference on Information Science and Applications (ICISA 2015). pp. 205–215. Springer, Pattaya, Thailand (2015).