1.
Abuelgasim, A., Kayem, A.V.: An Approach to Personalized Privacy Recommendations on Online Social Networks. In Proceedings of the 2nd International Conference on Information Systems Security and Privacy (ICISSP 2016), Rome, Italy - Feb. 19-21, 2016. pp. 126–137 (2016).
Most Online Social Networks (OSNs) implement privacy policies that enable users to protect their sensitive information against privacy violations. However, observations indicate that users find these privacy policies cumbersome and difficult to configure. Consequently, various approaches have been proposed to assist users with privacy policy configuration. These approaches, however, are limited to protecting either only profile attributes or only user-generated content. This is problematic because both profile attributes and user-generated content can contain sensitive information, so protecting one without the other can still result in privacy violations. A further drawback of existing approaches is that most require considerable user input, which is time-consuming and inefficient in terms of privacy policy configuration. To address these problems, we propose an automated privacy policy recommender system. The system relies on the expertise of existing OSN users, in addition to the target user's privacy policy history, to provide personalized privacy policy suggestions for profile attributes as well as user-generated content. Results from our prototype implementation indicate that the proposed recommender system provides accurate privacy policy suggestions with minimal user input.
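To illustrate the general idea (suggesting settings for unconfigured items from the policies of similar users plus the target user's own history), a minimal nearest-neighbour sketch is given below; the 0/1/2 visibility encoding, the similarity measure and all data are assumptions made for illustration, not the authors' actual recommender.

```python
import numpy as np

# Hypothetical encoding: one row per community user, one column per profile
# attribute or content type; values are visibility levels
# (0 = private, 1 = friends, 2 = public).
community_policies = np.array([
    [0, 1, 2, 1],
    [0, 0, 2, 1],
    [1, 1, 2, 2],
    [0, 1, 1, 1],
])

def recommend_policy(target_history, community, k=2):
    """Suggest a visibility level per item from the k most similar community users.

    target_history: partial policy vector of the target user; -1 marks items
    the user has never configured and for which a suggestion is needed.
    """
    known = target_history >= 0
    # Similarity is computed only over items the target user has already set.
    distances = np.abs(community[:, known] - target_history[known]).sum(axis=1)
    neighbours = community[np.argsort(distances)[:k]]
    suggestion = target_history.copy()
    for i in np.where(~known)[0]:
        # Use the (rounded mean) setting of the most similar users.
        suggestion[i] = int(round(neighbours[:, i].mean()))
    return suggestion

target = np.array([0, 1, -1, -1])          # two items still unconfigured
print(recommend_policy(target, community_policies))
```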
2.
Ambassa, P.L., Wolthusen, S.D., Kayem, A.V., Meinel, C.: Physical Attestation and Authentication to Detect Cheating in Resource Constrained Smart Microgrids. In Proceedings, 2nd Workshop on the Security of Industrial Control Systems and Cyber-Physical Systems (CyberICPS 2016), September 26-30, 2016, Heraklion, Greece (2016).
We present a physical attestation and authentication approach to detecting cheating in resource-constrained smart micro-grids. Our application scenario is a multi-user smart micro-grid (SMG) architecture supported by a low-cost and unreliable communications network. In this scenario, a malicious adversary can cheat by manipulating the measured power consumption/generation data; the reward is access to more than the per-user allocated power quota. Cheating discourages user participation, destabilises the grid and, in the worst case, causes a grid breakdown. Detecting cheating attacks is thus essential for a secure and resilient SMG, but it is also a challenging problem. We develop a cheating detection scheme that integrates the idea of physical attestation to assess whether the SMG system is under attack. Subsequently, we support our scheme with an authentication mechanism based on control signals to uniquely identify node subversion. A theoretical analysis demonstrates the efficiency and correctness of our proposed scheme for constrained SMGs.
3.
Perlich, A., Meinel, C.: Patient-provider teamwork via cooperative note taking on Tele-Board MED. Exploring Complexity in Health: An Interdisciplinary Systems Approach (Proceedings of MIE2016 at HEC2016) (2016).
There is significant, unexploited potential to improve patients' engagement in psychotherapy treatment through technology use. We develop Tele-Board MED (TBM), a digital tool to support documentation and patient-provider collaboration in medical encounters. Our objective is the evaluation of TBM's practical effects on patient-provider relationships and patient empowerment in the domain of talk-based mental health interventions. We tested TBM in individual therapy sessions at a psychiatric ward using action research methods. The qualitative results, in the form of therapist observations and patient stories, show an increased acceptance of diagnoses and improved patient-therapist bonding. We compare the observed effects to patient-provider relationship and patient empowerment models. We conclude that the functions of TBM – namely that notes are shared and cooperatively taken with the patient, that diagnostics and treatment procedures are depicted via visuals and in plain language, and that patients get a copy of their file – lead to increased patient engagement and improved collaboration, communication and integration in consultations.
4.
Hemati, H.R., Ghasemzadeh, M., Meinel, C.: A Hybrid Machine Learning Method for Intrusion Detection. International Journal of Engineering (IJE). pp. 1242–1246. IJE (2016).
Data security is an important concern for every computer system owner. An intrusion detection system is a device or software application that monitors a network or systems for malicious activity or policy violations. Various artificial intelligence techniques have already been used for intrusion detection; the main challenge in this area is the running speed of the available implementations. In this research work, we present a hybrid approach based on linear discriminant analysis and the extreme learning machine to build a tool for intrusion detection. In the proposed method, linear discriminant analysis is used to reduce the dimensions of the data and an extreme learning machine neural network is used for classification. This design allowed us to benefit from the advantages of both methods. We implemented the proposed method on a microcomputer with a 1.6 GHz Core i5 processor using a machine learning toolbox. To evaluate its performance, we ran it on a comprehensive intrusion detection data set, KDD, which is derived from the DARPA data set prepared by MIT Lincoln Labs. The experimental results are organized in tables and charts, and their analysis shows meaningful improvements in intrusion detection. In general, compared to existing methods, the proposed approach works faster with higher accuracy.
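A minimal sketch of the two-stage pipeline described above (dimensionality reduction followed by an extreme learning machine classifier) is shown below, assuming scikit-learn's LinearDiscriminantAnalysis and a hand-rolled ELM; the KDD preprocessing, the parameter choices and the toolbox used in the paper are not reproduced here.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

class ExtremeLearningMachine:
    """Single-hidden-layer network: random input weights, analytic output weights."""
    def __init__(self, n_hidden=200, rng=None):
        self.n_hidden = n_hidden
        self.rng = rng or np.random.default_rng(0)

    def fit(self, X, y):
        self.classes_, y_idx = np.unique(y, return_inverse=True)
        T = np.eye(len(self.classes_))[y_idx]          # one-hot targets
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)               # random hidden layer
        self.beta = np.linalg.pinv(H) @ T              # least-squares output weights
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return self.classes_[np.argmax(H @ self.beta, axis=1)]

# X_train, y_train would be the preprocessed KDD features and attack labels:
# lda = LinearDiscriminantAnalysis(n_components=...).fit(X_train, y_train)
# elm = ExtremeLearningMachine().fit(lda.transform(X_train), y_train)
# y_pred = elm.predict(lda.transform(X_test))
```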
5.
Marufu, A., Kayem, A.V., Wolthusen, S.D.: Power Auctioning in Resource Constrained Micro-Grids: Cases of Cheating. In Proceedings, 11th International Conference on Critical Information Infrastructures Security (CRITIS 2016), October 10-12, 2016, Paris, France, Lecture Notes in Computer Science, Springer-Verlag (2016).
In this paper, we consider the Continuous Double Auction (CDA) scheme as a comprehensive power resource allocation approach for micro-grids. Users of CDA schemes are typically self-interested and work to maximize their own profit. Meanwhile, security in CDAs has received limited attention, with little to no theoretical or experimental evidence demonstrating how an adversary cheats to gain excess energy or derive economic benefits. We identify two forms of cheating realised by changing the trading agent (TA) strategy of some of the agents in a homogeneous CDA scheme: in one case an adversary gains control and degrades other trading agents' strategies to gain more surplus, while in the other, K colluding trading agents employ an automated, coordinated approach to changing their TA strategies to maximize surplus power gains. We propose an exception handling mechanism that uses allocative efficiency and message overheads to detect and mitigate both forms of cheating.
6.
Yang, H., Wang, C., Bartz, C., Meinel, C.: SceneTextReg: A Real-Time Video OCR System. Proceedings of the 2016 ACM on Multimedia Conference. pp. 698–700. ACM, Amsterdam, The Netherlands (2016).
7.
Yang, H., Wang, C., Bartz, C., Meinel, C.: SceneTextReg: a real-time video OCR system. Proceedings of the 24th ACM international conference on Multimedia (2016).
8.
Kayem, A.V., Vester, C., Meinel, C.: Automated k-Anonymization and l-Diversity for Shared Data Privacy. In Proceedings, 27th International Conference on Database and Expert Systems Applications, DEXA 2016, Porto, Portugal, September 5-8, 2016, Part I. pp. 105–120. Springer (2016).
Analyzing data is a cost-intensive process, particularly for organizations lacking the necessary in-house human and computational capital. Data analytics outsourcing offers a cost-effective solution, but data sensitivity and query response time requirements make data protection a necessary pre-processing step. For performance and privacy reasons, anonymization is preferred over encryption. Yet, manual anonymization is time-intensive and error-prone. Automated anonymization is a better alternative but requires satisfying the conflicting objectives of utility and privacy. In this paper, we present an automated anonymization scheme that extends the standard k-anonymization and l-diversity algorithms to satisfy the dual objectives of data utility and privacy. We use a multi-objective optimization scheme that employs a weighting mechanism to minimise information loss and maximize privacy. Our results show that automating l-diversity results in an added average information loss of 7% over automated k-anonymization, but yields a diversity of 9–14% compared to 10–30% in k-anonymised datasets. The lesson that emerges is that automated l-diversity offers better privacy than k-anonymization, with negligible information loss.
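As a rough illustration of the quantities being traded off, the sketch below computes k, l and a crude information-loss proxy for a generalised table and combines them with a weight; the metrics and the weighting are simplified stand-ins for illustration, not the optimisation scheme used in the paper.

```python
import pandas as pd

def anonymity_metrics(df, quasi_identifiers, sensitive, w_loss=0.5):
    """Compute k, l and a toy weighted objective for a generalised table.

    k    = size of the smallest equivalence class over the quasi-identifiers,
    l    = smallest number of distinct sensitive values within any class,
    loss = crude information-loss proxy (share of masked '*' characters).
    """
    groups = df.groupby(quasi_identifiers, dropna=False)
    k = groups.size().min()
    l = groups[sensitive].nunique().min()
    stars = df[quasi_identifiers].apply(lambda col: col.str.count(r"\*")).to_numpy().sum()
    chars = df[quasi_identifiers].apply(lambda col: col.str.len()).to_numpy().sum()
    loss = stars / chars
    # Weighted objective: lower is better; w_loss trades utility against privacy.
    score = w_loss * loss + (1 - w_loss) * (1.0 / max(l, 1))
    return {"k": int(k), "l": int(l), "info_loss": round(float(loss), 3),
            "score": round(float(score), 3)}

table = pd.DataFrame({
    "zip":     ["130**", "130**", "148**", "148**"],
    "age":     ["2*",    "2*",    "3*",    "3*"],
    "disease": ["flu",   "cold",  "flu",   "flu"],
})
print(anonymity_metrics(table, ["zip", "age"], "disease"))
```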
9.
Torkura, K., Meinel, C.: Towards Vulnerability Assessment as a Service in OpenStack Clouds. Proceedings of the 41st IEEE Conference on Local Computer Networks (LCN). IEEE, Dubai, UAE (2016).
Efforts towards improving security in cloud infrastructures recommend regulatory compliance approaches such as HIPAA and PCI DSS. Similarly, vulnerability assessments are imperative for fulfilling these regulatory compliance requirements. Nevertheless, conducting vulnerability assessments in cloud environments requires approaches different from those found in traditional computing. Factors such as multi-tenancy, elasticity, self-service and cloud-specific vulnerabilities must be considered. Furthermore, the Anything-as-a-Service model of the cloud stimulates security automation and user-intuitive services. In this paper, we tackle the challenge of efficient vulnerability assessments at the system level, in particular for core cloud applications. Within this scope, we focus on the use case of a cloud administrator. We believe the security of the underlying cloud software is crucial to the overall health of a cloud infrastructure, since it is the foundation upon which other applications within the cloud function. We demonstrate our approach using OpenStack and, through our experiments, show that our prototype implementation is effective at identifying "OpenStack-native" vulnerabilities. We also automate the process of identifying insecure configurations in the cloud and initiate steps for deploying Vulnerability Assessment-as-a-Service in OpenStack.
10.
Wang, C., Yang, H., Bartz, C., Meinel, C.: Image captioning with deep bidirectional LSTMs. Proceedings of the 24th ACM international conference on Multimedia. pp. 988–997 (2016).
11.
Staubitz, T., Teusner, R., Renz, J., Meinel, C.: First Steps in Automated Proctoring. Proceedings of the Fourth MOOC European Stakeholders Summit (EMOOCs 2016). P.A.U (2016).
12.
Renz, J., Navarro-Suarez, G., Sathi, R., Staubitz, T., Meinel, C.: Enabling Schema Agnostic Learning Analytics in a Service-Oriented MOOC Platform. Proceedings of ACM Learning at Scale Conference (L@S2016). ACM (2016).
13.
Renz, J., Hoffmann, D., Staubitz, T., Meinel, C.: Using A/B Testing in MOOC Environments. Proceedings of the 6th International Conference on Learning Analytics and Knowledge (LAK2016). SOLAR (2016).
14.
Sianipar, J., Willems, C., Meinel, C.: A Container based Virtual Laboratory for Internet Security e-Learning. International Journal of Learning and Teaching. IJLT. pp. 121–128 (2016).
15.
Rantzsch, H., Yang, H., Meinel, C.: Signature embedding: Writer independent offline signature verification with deep metric learning. International symposium on visual computing. pp. 616–625. Springer (2016).
16.
Staubitz, T., Klement, H., Teusner, R., Renz, J., Meinel, C.: CodeOcean - A Versatile Platform for Practical Programming Exercises in Online Environments. Proceedings of IEEE Global Engineering Education Conference (EDUCON2016). IEEE (2016).
17.
Staubitz, T., Teusner, R., Prakash, N., Meinel, C.: Cellular Automata as Basis for Programming Exercises in a MOOC on Test-Driven Development. IEEE International Conference on Teaching, Assessment, and Learning for Engineering (TALE2016). IEEE (2016).
Programming tasks are an important part of teaching computer programming, as they help students develop essential programming skills and techniques through practice. The design of educational problems plays a crucial role in how much experiential knowledge is imparted to the learner, both in terms of quality and quantity. Badly designed tasks have been known to put students off practicing programming; hence, there is a need for carefully designed problems. Cellular Automata programming lends itself as a very suitable candidate among problems designed for programming practice. In this paper we describe how various types of problems can be designed using concepts from Cellular Automata and discuss the features which make them good practice problems with regard to instructional pedagogy. We also present a case study on a Cellular Automata programming exercise used in a MOOC on Test-Driven Development using JUnit, and discuss the automated evaluation of code submissions and the feedback on how participants in this course received the exercise.
18.
Rezaei, M., Yang, H., Meinel, C.: Brain Abnormality Detection by Deep Convolutional Neural Network. arXiv preprint arXiv:1708.05206. (2016).
19.
Knuth, M., Hartig, O., Sack, H.: Scheduling Refresh Queries for Keeping Results from a SPARQL Endpoint Up-to-Date. Proc. of the 15th International Conference on Ontologies, DataBases, and Applications of Semantics (ODBASE 2016) (2016).
Many datasets change over time. As a consequence, long-running applications that cache and repeatedly use query results obtained from a SPARQL endpoint may resubmit the queries regularly to ensure up-to-dateness of the results. While this approach may be feasible if the number of such regular refresh queries is manageable, with an increasing number of applications adopting this approach, the SPARQL endpoint may become overloaded with such refresh queries. A more scalable approach would be to use a middle-ware component at which the applications register their queries and get notified with updated query results once the results have changed. Then, this middle-ware can schedule the repeated execution of the refresh queries without overloading the endpoint. In this paper, we study the problem of scheduling refresh queries for a large number of registered queries by assuming an overload-avoiding upper bound on the length of a regular time slot available for testing refresh queries. We investigate a variety of scheduling strategies and compare them experimentally in terms of time slots needed before they recognize changes and number of changes that they miss.
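The sketch below illustrates one plausible scheduling strategy under a per-slot budget (prioritising queries that changed often and have not been refreshed recently); it is an illustrative stand-in, not one of the strategies evaluated in the paper.

```python
import heapq

class RefreshScheduler:
    """Pick at most `budget` registered queries to re-execute in each time slot.

    Queries whose results changed more often in the past are refreshed first;
    every query's priority also grows with the time since its last refresh, so
    none starves. This is only one plausible strategy, not the paper's set.
    """
    def __init__(self, query_ids):
        self.change_count = {q: 0 for q in query_ids}
        self.last_refresh = {q: 0 for q in query_ids}

    def next_slot(self, slot, budget):
        scored = [(-(self.change_count[q] + (slot - self.last_refresh[q])), q)
                  for q in self.change_count]
        chosen = [q for _, q in heapq.nsmallest(budget, scored)]
        for q in chosen:
            self.last_refresh[q] = slot
        return chosen

    def report_change(self, query_id):
        self.change_count[query_id] += 1

sched = RefreshScheduler(["q1", "q2", "q3", "q4"])
sched.report_change("q3")
print(sched.next_slot(slot=1, budget=2))   # ['q3', 'q1']
```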
20.
Knuth, M., Hartig, O., Sack, H.: Scheduling Refresh Queries for Keeping Results from a SPARQL Endpoint Up-to-Date (Extended Version). CoRR. abs/1608.08130, (2016).
Many datasets change over time. As a consequence, long-running applications that cache and repeatedly use query results obtained from a SPARQL endpoint may resubmit the queries regularly to ensure up-to-dateness of the results. While this approach may be feasible if the number of such regular refresh queries is manageable, with an increasing number of applications adopting this approach, the SPARQL endpoint may become overloaded with such refresh queries. A more scalable approach would be to use a middle-ware component at which the applications register their queries and get notified with updated query results once the results have changed. Then, this middle-ware can schedule the repeated execution of the refresh queries without overloading the endpoint. In this paper, we study the problem of scheduling refresh queries for a large number of registered queries by assuming an overload-avoiding upper bound on the length of a regular time slot available for testing refresh queries. We investigate a variety of scheduling strategies and compare them experimentally in terms of time slots needed before they recognize changes and number of changes that they miss.
21.
Marufu, A., Kayem, A.V., Wolthusen, S.D.: Fault-Tolerant Distributed Continuous Double Auctioning on Computationally Constrained Microgrids. In Proceedings of the 2nd International Conference on Information Systems Security and Privacy (ICISSP 2016), Volume 1, Rome, Italy. pp. 448–456 (2016).
In this article we show that a mutual exclusion protocol supporting continuous double auctioning for power trading on computationally constrained microgrids can be made fault tolerant. Fault tolerance allows the CDA algorithm to operate reliably and contributes to overall grid stability and robustness. Contrary to fault tolerance approaches proposed in the literature, which bypass faulty nodes through a network reconfiguration process, our approach masks crash failures of cluster head nodes through redundancy. Masking failure of the main node ensures the dependent cluster nodes hosting trading agents are not isolated from auctioning. A redundant component acts as a backup that takes over if the primary component fails, allowing for some fault tolerance and a graceful degradation of the network. Our proposed fault-tolerant CDA algorithm has a time complexity of O(N) and a check-pointing message complexity of O(W), where N is the number of messages exchanged per critical section and W is the number of check-pointing messages.
22.
Staubitz, T., Brehm, M., Jasper, J., Werkmeister, T., Teusner, R., Willems, C., Renz, J., Meinel, C.: Vagrant Virtual Machines for Hands-On Exercises in Massive Open Online Courses. Smart Education and e-Learning 2016. pp. 363–373. Springer International Publishing (2016).
In many MOOCs, hands-on exercises are a key component. Their format must be deliberately planned to satisfy the needs of an increasingly heterogeneous student body. At the same time, the costs of maintenance and support on the course provider's side have to be kept low. The paper at hand reports on our experiments with a tool called Vagrant in this context. It has been successfully employed for use cases similar to ours and thus promises to be an option for achieving our goals.
23.
Che, X., Luo, S., Yang, H., Meinel, C.: Sentence Boundary Detection Based on Parallel Lexical and Acoustic Models. Proceedings of Interspeech 2016. pp. 257–261. San Francisco, CA, USA (2016).
24.
Che, X., Wang, C., Yang, H., Meinel, C.: Punctuation prediction for unsegmented transcript based on word vector. Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16). pp. 654–658 (2016).
25.
Wang, C., Yang, H., Meinel, C.: Exploring multimodal video representation for action recognition. 2016 International Joint Conference on Neural Networks (IJCNN). pp. 1924–1931. IEEE (2016).
26.
Krentz, K.-F., Meinel, C., Schnjakin, M.: POTR: Practical On-the-fly Rejection of Injected and Replayed 802.15.4 Frames. Proceedings of the International Conference on Availability, Reliability and Security (ARES 2016). IEEE, Salzburg, Austria (2016).
The practice of rejecting injected and replayed 802.15.4 frames only after they have been received leaves 802.15.4 nodes vulnerable to broadcast and droplet attacks. In these attacks, an attacker injects or replays plenty of 802.15.4 frames; as a result, victim 802.15.4 nodes stay in receive mode for extended periods of time and expend their limited energy. He et al. considered embedding one-time passwords in the synchronization headers of 802.15.4 frames so that 802.15.4 nodes do not fully receive injected and replayed 802.15.4 frames in the first place. However, He et al.'s proposal, as well as similar ones, lacks support for broadcast frames and depends on special hardware. In this paper, we propose Practical On-the-fly Rejection (POTR) to reject injected and replayed 802.15.4 frames early during receipt. Unlike previous proposals, POTR supports broadcast frames and can be implemented with many off-the-shelf 802.15.4 transceivers. In fact, we implemented POTR with CC2538 transceivers and integrated it into the Contiki operating system. Furthermore, we demonstrate that, compared to using no defense, POTR reduces the time that 802.15.4 nodes stay in receive mode upon receiving an injected or replayed 802.15.4 frame by a factor of up to 16. Beyond that, POTR has a small processing and memory overhead, and incurs no communication overhead.
27.
Che, X., Luo, S., Meinel, C.: An Attempt at MOOC Localization for Chinese-Speaking Users. International Journal of Information and Education Technology, Volume 6, Number 2. pp. 90–96 (2016).
“Internetworking with TCP/IP” is a massive open online course (MOOC) provided by the Germany-based MOOC platform “openHPI”, which has been offered in German, English and, recently, Chinese, with similar content. In this paper, the authors, who worked jointly as a teacher or as teaching assistants in this course, share ideas derived from daily teaching experience, analysis of the statistics, a comparison of the performance in the different language versions, and feedback from user questionnaires. Additionally, the motivation, attempts at and suggestions for MOOC localization are discussed.
28.
Fernández, J.D., Umbrich, J., Polleres, A., Knuth, M.: Evaluating Query and Storage Strategies for RDF Archives. Proceedings of the 12th International Conference on Semantic Systems. pp. 41–48. ACM (2016).
There is an emerging demand for efficiently archiving and (temporally) querying different versions of evolving semantic Web data. As novel archiving systems are starting to address this challenge, foundations and standards for benchmarking RDF archives are needed to evaluate their storage space efficiency and the performance of different retrieval operations. To this end, we provide theoretical foundations on the design of data and queries to evaluate emerging RDF archiving systems. Then, we instantiate these foundations along a concrete set of queries on the basis of a real-world evolving dataset. Finally, we perform an empirical evaluation of various current archiving techniques and querying strategies on this data. Our work comprises, to the best of our knowledge, the first benchmark for querying evolving RDF data archives.
29.
Waitelonis, J., Jürges, H., Sack, H.: Don’t compare Apples to Oranges - Extending GERBIL for a fine grained NEL evaluation. Proc. of 12th Int. Conf. on Semantic Systems (SEMANTICS 2016) (2016).
In recent years, named entity linking (NEL) tools were primarily developed as general approaches, whereas today numerous tools focus on specific domains, such as the mapping of persons and organizations only, or the annotation of locations or events in microposts. However, the available benchmark datasets used for the evaluation of NEL tools do not reflect this specialization trend. We have analyzed the evaluation process applied in the NEL benchmarking framework GERBIL and its benchmark datasets. Based on these insights we extend the GERBIL framework to enable a more fine-grained evaluation and in-depth analysis of the used benchmark datasets according to different emphases. In this paper, we present the implementation of an adaptive filter for arbitrary entities as well as a system to automatically measure benchmark dataset properties, such as the extent of content-related ambiguity and diversity. The implementation as well as a result visualization are integrated into the publicly available GERBIL framework.
30.
Kayem, A.V., Vester, C., Meinel, C.: Automated k-Anonymization and l-Diversity for Shared Data Privacy. In: Hartmann, S. and Ma, H. (eds.) In Proceedings, 27th International Conference on Database and Expert Systems Applications, DEXA 2016, Porto, Portugal, September 5-8, 2016, Part I. pp. 105–120. Springer (2016).
Analyzing data is a cost-intensive process, particularly for organizations lacking the necessary in-house human and computational capital. Data analytics outsourcing offers a cost-effective solution, but data sensitivity and query response time requirements make data protection a necessary pre-processing step. For performance and privacy reasons, anonymization is preferred over encryption. Yet, manual anonymization is time-intensive and error-prone. Automated anonymization is a better alternative but requires satisfying the conflicting objectives of utility and privacy. In this paper, we present an automated anonymization scheme that extends the standard k-anonymization and l-diversity algorithms to satisfy the dual objectives of data utility and privacy. We use a multi-objective optimization scheme that employs a weighting mechanism to minimise information loss and maximize privacy. Our results show that automating l-diversity results in an added average information loss of 7% over automated k-anonymization, but yields a diversity of 9–14% compared to 10–30% in k-anonymised datasets. The lesson that emerges is that automated l-diversity offers better privacy than k-anonymization, with negligible information loss.
31.
Wang, C., Yang, H., Bartz, C., Meinel, C.: Image Captioning with Deep Bidirectional LSTMs. Proceedings of the 2016 ACM on Multimedia Conference. pp. 988–997. ACM, Amsterdam, The Netherlands (2016).
32.
Rantzsch, H., Yang, H., Meinel, C.: Signature Embedding: Writer Independent Offline Signature Verification with Deep Metric Learning. In: Bebis, G., Boyle, R., Parvin, B., Koracin, D., Porikli, F., Skaff, S., Entezari, A., Min, J., Iwai, D., Sadagic, A., Scheidegger, C., and Isenberg, T. (eds.) Advances in Visual Computing: 12th International Symposium, ISVC 2016, Las Vegas, NV, USA, December 12-14, 2016, Proceedings, Part II. pp. 616–625. Springer International Publishing, Cham (2016).
The handwritten signature is widely employed and accepted as a proof of a person's identity. In our everyday life, it is often verified manually, yet only casually. As a result, the need for automatic signature verification arises. In this paper, we propose a new approach to the writer independent verification of offline signatures. Our approach, named Signature Embedding, is based on deep metric learning. Comparing triplets of two genuine and one forged signature, our system learns to embed signatures into a high-dimensional space, in which the Euclidean distance functions as a metric of their similarity. Our system ranks best in nearly all evaluation metrics from the ICDAR SigWiComp 2013 challenge. The evaluation shows a high generality of our system: being trained exclusively on Latin script signatures, it outperforms the other systems even for signatures in Japanese script.
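The triplet objective and the distance-based verification step can be sketched as follows; the 128-dimensional random vectors stand in for learned embeddings, and the margin and threshold values are illustrative assumptions rather than the settings used in the paper.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge loss that pulls two genuine signatures together and pushes a forgery away.

    anchor, positive: embeddings of two genuine signatures of the same writer;
    negative: embedding of a forged (or other-writer) signature.
    """
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

def verify(embedding_a, embedding_b, threshold=0.8):
    """Accept the pair as genuine if their Euclidean distance is below a threshold."""
    return np.linalg.norm(embedding_a - embedding_b) < threshold

rng = np.random.default_rng(0)
a, p, n = rng.normal(size=(3, 128))        # stand-ins for learned 128-d embeddings
print(triplet_loss(a, p, n), verify(a, p))
```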
33.
Knuth, M., Waitelonis, J., Sack, H.: I am a Machine, let me understand Web Media!. Web Engineering: 16th International Conference, ICWE 2016, Lugano, Switzerland, June 6-9, 2016. Proceedings. pp. 467–475. Springer International Publishing (2016).
34.
Staubitz, T., Petrick, D., Bauer, M., Renz, J., Meinel, C.: Improving the Peer Assessment Experience on MOOC Platforms. Proceedings of ACM Learning at Scale Conference (L@S2016). ACM (2016).
Massive Open Online Courses (MOOCs) have revolutionized higher education by offering university-like courses to a large number of learners via the Internet. The paper at hand takes a closer look at peer assessment as a tool for delivering individualized feedback and engaging assignments to MOOC participants. Benefits, such as scalability for MOOCs and higher-order learning, and challenges, such as grading accuracy and rogue reviewers, are described. Common practices and the state of the art for counteracting these challenges are highlighted. Based on this research, the paper describes a peer assessment workflow and its implementation on the openHPI and openSAP MOOC platforms. This workflow combines the best practices of existing peer assessment tools and introduces some small but crucial improvements.
35.
Marufu, A.M., Kayem, A.V., Wolthusen, S.D.: Circumventing Cheating on Power Auctioning in Resource Constrained Micro-Grids. In Proceedings, 14th IEEE International Conference on Smart City (SmartCity 2016), Sydney, Australia Dec. 12-14, 2016 (2016).
Decentralized Continuous Double Auctioning (CDA) offers a flexible approach to power distribution in resource-constrained (RC) smart micro-grids. Grid participants (buyers and sellers) can obtain power at a suitable price at both on- and off-peak periods. Decentralized CDA schemes are, however, vulnerable to two attacks, namely 'Victim Strategy Downgrade' and 'Collusion'. Both attacks foil the CDA scheme by allowing an individual to gain surplus energy, which leads to low allocative efficiency and is undesirable for maintaining grid stability and reliability. In this paper we propose a novel scheme to circumvent power auction cheating attacks. Our scheme works by employing an exception handling mechanism with cheating detection and resolution algorithms. Our correctness and complexity analysis demonstrates that the solution is both sound and performance-efficient under resource-constrained conditions.
36.
Tietz, T., Jäger, J., Waitelonis, J., Sack, H.: Semantic Annotation and Information Visualization for Blogposts with refer. In: Ivanova, V., Lambrix, P., Lohmann, S., Pesquita, C. (eds.) CEUR Workshop Proceedings. pp. 28–40 (2016).
37.
Tietz, T., Waitelonis, J., Jäger, J., Sack, H.: refer: a Linked Data based Text Annotation and Recommender System for Wordpress. In: Kawamura, T., Paulheim, H. (eds.) Proceedings of the 15th International Semantic Web Conference, Demo Track (2016).
38.
Renz, J., Schwerer, F., Meinel, C.: openSAP: Evaluating xMOOC Usage and Challenges for Scalable and Open Enterprise Education. Proceedings of the Eighth International Conference on E-Learning in the Workplace (2016).
39.
Debattista, J., Fernández, J.D., Knuth, M., Kontokostas, D., Rula, A., Umbrich, J., Zaveri, A.: Joint proceedings of the 2nd Workshop on Managing the Evolution and Preservation of the Data Web (MEPDaW 2016) and the 3rd Workshop on Linked Data Quality (LDQ 2016). CEUR.org (2016).
This joint volume of proceedings gathers together papers from the 2nd Workshop on Managing the Evolution and Preservation of the Data Web (MEPDaW) and the 3rd Workshop on Linked Data Quality (LDQ), held on 30 May 2016 during the 13th ESWC conference in Anissaras, Crete, Greece.
40.
Ambassa, P.L., Kayem, A.V., Wolthusen, S.D., Meinel, C.: Privacy Violations in Constrained Micro-Grids: Adversarial Cases. In Proceedings of the 30th IEEE International Conference on Advanced Information Networking and Applications Workshops (WAINA 2016), March 23-25, 2016, Crans-Montana, Switzerland. pp. 601–606 (2016).
Smart micro-grid architectures are small-scale electricity provision networks composed of individual electricity providers and consumers. Supporting micro-grids with computationally limited devices is a cost-effective approach to service provisioning in resource-limited settings. However, the limited availability of real-time measurements and the unreliable communication network make the use of Advanced Metering Infrastructure (AMI) for monitoring and control a challenging problem. Grid operation and stability are therefore reliant on inaccurate and incomplete information. Consequently, data gathering and analytics raise privacy concerns for grid users, which is undesirable. In this paper, we study adversarial scenarios for privacy violations on micro-grids. We consider two types of privacy threats in constrained micro-grids, namely inferential and aggregation attacks, because both capture scenarios that can be used to provoke energy theft and destabilize the grid. Grid destabilization leads to distrust between suppliers and consumers. This work provides a roadmap towards secure and resilient smart micro-grid energy networks.
41.
Shaabani, N., Meinel, C.: Detecting Maximum Inclusion Dependencies without Candidate Generation. Database and Expert Systems Applications: 27th International Conference, DEXA 2016, Porto, Portugal, September 5-8, 2016, Proceedings, Part II. pp. 118–133 (2016).
Inclusion dependencies (INDs) within and across databases are an important relationship for many applications in data integration, schema (re-)design, integrity checking, and query optimization. Existing techniques for detecting all INDs need to generate IND candidates and test their validity against the given data instance. The major disadvantage of this approach, however, is the exponentially growing number of data accesses in terms of SQL queries as well as I/O operations. We introduce Mind2, a new approach for detecting n-ary INDs (n > 1) without any candidate generation. Mind2 implements a new characterization of the maximum INDs developed in this paper. This characterization is based on set operations defined on certain metadata that Mind2 generates by accessing the database only twice per valid unary IND. Thus, Mind2 eliminates the exponential number of data accesses needed by existing approaches. Furthermore, the experiments show that Mind2 is significantly more scalable than hypergraph-based approaches.
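For readers unfamiliar with inclusion dependencies, the sketch below shows the naive, candidate-based check for unary INDs that approaches like Mind2 improve upon for the n-ary case; it only illustrates the relationship being detected and is not the Mind2 algorithm.

```python
def unary_inds(tables):
    """Naively list all unary inclusion dependencies A ⊆ B between table columns.

    `tables` maps table names to {column name: list of values}. This is the
    candidate-and-test approach whose data accesses grow quickly; it is shown
    here only to illustrate what an inclusion dependency is.
    """
    columns = {(t, c): set(vals)
               for t, cols in tables.items() for c, vals in cols.items()}
    return [(lhs, rhs) for lhs in columns for rhs in columns
            if lhs != rhs and columns[lhs] <= columns[rhs]]

tables = {
    "orders":    {"customer_id": [1, 2, 2, 3]},
    "customers": {"id": [1, 2, 3, 4], "name": ["a", "b", "c", "d"]},
}
print(unary_inds(tables))
# [(('orders', 'customer_id'), ('customers', 'id'))]
```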
42.
Dojchinovski, M., Kontokostas, D., Rößling, R., Knuth, M., Hellmann, S.: DBpedia Links: The Hub of Links for the Web of Data. Proceedings of the SEMANTiCS 2016 Conference (SEMANTiCS 2016) (2016).
Links are the key enabler for the retrieval of related information on the Web of Data. Currently, DBpedia is one of the central interlinking hubs in the Linked Open Data (LOD) cloud. With over 28 million described and localized things, it is one of the largest open datasets. With the increasing number of linked datasets, there is a need for proper maintenance of these links. In this paper, we describe the DBpedia Links repository, which maintains linksets between DBpedia and other LOD datasets, and the system for the maintenance, update and quality assurance of these linksets.
43.
Jaeger, D., Pelchen, C., Graupner, H., Cheng, F., Meinel, C.: Analysis of Publicly Leaked Credentials and the Long Story of Password (Re-)use. Proceedings of the 11th International Conference on Passwords (PASSWORDS2016). Springer, Bochum, Germany (2016).
44.
Staubitz, T., Teusner, R., Renz, J., Meinel, C.: Automatisierte Online-Aufsicht im Kontext der Wertigkeit von Zertifikaten einer MOOC Plattform. DeLFI 2016 - Die 14. E-Learning Fachtagung Informatik, 11.-14. September 2016, Potsdam. pp. 125–136 (2016).
The credibility and verifiability of certificates is an essential part of any form of certified training, and this naturally also applies to MOOCs. In this context, however, an additional difficulty is that individually proctoring the exams of thousands of participants in person is hard to realize offline. A technique is therefore needed to carry out this proctoring online in order to increase the trustworthiness and value of these certificates. In this study we compare different variants of online proctoring. We present the results of several surveys among our participants concerning their view of the value of the certificates and assess these statements in our context. Finally, we present an experiment we conducted with a new variant of online proctoring: instead of relying on human eyes, the face in front of the camera is automatically matched against a stored image in order to verify that the registered participant is the one actually taking the exam.
45.
Malchow, M., Bauer, M., Meinel, C.: Couch Learning Mode: Multiple-Video Lecture Playlist Selection out of a Lecture Video Archive for E-learning Students. Proceedings of the 2016 ACM on SIGUCCS Annual Conference. pp. 77–82. ACM (2016).
During a video-recorded university class, students have to watch several hours of video content, which can easily add up to several days of video content during a semester. Naturally, not all 90 minutes of a typical lecture are relevant for the exam. When the semester ends with a final exam, students have to study the important parts of all the lectures more intensively. To simplify the learning process and make it more efficient, we have introduced the Couch Learning Mode in our lecture video archive. With this approach, students can create custom playlists out of the video lecture archive with a time frame for every selected video. Finally, students can lean back and watch all relevant video parts consecutively for the exam without being interrupted. Additionally, students can share their playlists with other students, or they can use the video search to watch all relevant lecture videos about a topic. This approach uses playlists and HTML5 technologies to realize the consecutive video playback; furthermore, the powerful Lecture Butler search engine is used to find worthwhile video parts for certain topics. Our evaluation shows that students are more satisfied when they can manually create playlists of the parts relevant for an exam, and that they are keen on watching the top search results showing relevant parts of lectures for a topic of interest. The Couch Learning Mode supports and motivates students to learn with video lectures for an exam and in daily life.
46.
Wenzel, M., Klinger, A., Meinel, C.: Tele-Board Prototyper - Distributed 3D Modeling in a Web-Based Real-Time Collaboration System. 2016 International Conference on Collaboration Technologies and Systems. pp. 447–453. IEEE (2016).
Prototypes help people to externalize their ideas and are a basic element for gathering feedback on an early product design. Prototyping is oftentimes a team-based method traditionally involving physical and analog tools. At the same time, collaboration among geographically dispersed team members is becoming standard practice for companies and research teams. Therefore, a growing need arises for collaborative prototyping environments. We present a standards-compliant, web browser-based real-time remote 3D modeling system. We utilize the cross-platform WebGL rendering API for hardware-accelerated visualization of 3D models. Synchronization relies on WebSocket-based message interchange via a centralized Node.js real-time collaboration server. In a first co-located user test, participants were able to rebuild physical prototypes without prior knowledge of the system. The provided system design and its implementation can thus serve as a basis for visual real-time collaboration systems available across a multitude of hardware devices.
47.
Alibabaie, N., Ghasemzadeh, M., Meinel, C.: A variant of genetic algorithm for non-homogeneous population. International Conference on Applied Mathematics, Computational Science and Systems Engineering, Sapienza University, Rome, Italy (2016).
The selection of initial points, the number of clusters and finding proper cluster centers are still the main challenges in clustering processes. In this paper, we suggest a genetic algorithm-based method which searches several solution spaces simultaneously. The solution spaces are population groups consisting of elements with a similar structure. Elements in a group have the same size, while elements in different groups are of different sizes. The proposed algorithm processes the population in groups of chromosomes with one gene, two genes, up to k genes; these genes hold the corresponding information about the cluster centers. In the proposed method, the crossover and mutation operators can accept parents of different sizes, which can lead to versatility in the population and information transfer among sub-populations. We implemented the proposed method and evaluated its performance against some random datasets as well as the Ruspini dataset. The experimental results show that the proposed method can effectively determine the appropriate number of clusters and recognize their centers. Overall, this research implies that using a heterogeneous population in the genetic algorithm can lead to better results.
48.
Agt-Rickauer, H., Waitelonis, J., Tietz, T., Sack, H.: Data Integration for the Media Value Chain. Proceedings of the ISWC 2016 Posters & Demonstrations Track. CEUR-WS.org, Kobe, Japan (2016).
49.
Khazaei, A., Ghasemzadeh, M., Meinel, C.: Solution Prediction for Vulnerabilities using Textual Data. 13th International Conference on Applied Computing, University of Mannheim, Germany. pp. 200–204 (2016).
This paper reports on a research project in progress. Each year many software vulnerabilities are discovered and reported. These vulnerabilities can lead to system exploitation and, consequently, financial and information losses. Soon after vulnerabilities are detected, requests for solutions arise, yet it usually takes time and effort until an effective solution is provided. It is therefore very desirable to have an automated vulnerability solution predictor. In this paper we introduce an effective approach to building such a predictive system. In the first step, we use text mining techniques to extract features from the available textual data concerning vulnerabilities. Due to the pattern of overlap between different categories of vulnerabilities and their solutions, we found an overlapping clustering algorithm to be the most suitable method for clustering them. After that, we attempt to find the relationships among the obtained clusters. In the last step, we use machine learning methods to construct the requested solution predictor. Our approach proposes automated quick workaround solutions: with a workaround, users do not need to wait for a patch or a new version of the software but can bypass the problem caused by a vulnerability, with additional effort, to avoid its damage.
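A generic version of the final prediction step (mapping vulnerability descriptions to solution categories via text features) can be sketched with scikit-learn as below; TF-IDF plus logistic regression is a simplified stand-in for the overlapping-clustering pipeline described above, and the example data is invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy vulnerability descriptions paired with a coarse solution category.
descriptions = [
    "buffer overflow in image parser allows remote code execution",
    "sql injection in login form exposes user table",
    "cross-site scripting in comment field",
    "heap overflow when decoding crafted font file",
]
solutions = ["patch", "sanitize-input", "sanitize-input", "patch"]

# TF-IDF features over unigrams and bigrams, then a linear classifier.
predictor = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
predictor.fit(descriptions, solutions)
print(predictor.predict(["stack overflow in pdf renderer"]))
```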
50.
Hentschel, C., Wiradarma, T.P., Sack, H.: Fine tuning CNNs with scarce training data - Adapting ImageNet to art epoch classification. 2016 IEEE International Conference on Image Processing (ICIP). pp. 3693–3697. IEEE (2016).
Deep Convolutional Neural Networks (CNN) have recently been shown to outperform previous state-of-the-art approaches for image classification. Their success must in part be attributed to the availability of large labeled training sets such as those provided by the ImageNet benchmarking initiative. When training data is scarce, however, CNNs have been shown to fail to learn descriptive features. Recent research shows that supervised pre-training on external data followed by domain-specific fine-tuning yields a significant performance boost when external data and target domain show similar visual characteristics. Transfer learning from a base task to a highly dissimilar target task, however, has not yet been fully investigated. In this paper, we analyze the performance of different feature representations for the classification of paintings into art epochs. Specifically, we evaluate the impact of training set sizes on CNNs trained with and without external data and compare the obtained models to linear models based on Improved Fisher Encodings. Our results underline the superior performance of fine-tuned CNNs but likewise suggest Fisher Encodings in scenarios where training data is limited.
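The fine-tuning recipe referred to above (ImageNet pre-training followed by domain-specific adaptation) can be sketched in PyTorch as follows; ResNet-18, the frozen-backbone setup and the 10-class head are illustrative assumptions (torchvision ≥ 0.13 is assumed for the weights argument), not the exact networks or training regime evaluated in the paper.

```python
import torch
from torch import nn
from torchvision import models

# Load an ImageNet-pretrained backbone and replace its classifier head with one
# sized for the target task (here: a hypothetical set of 10 art epochs).
model = models.resnet18(weights="IMAGENET1K_V1")   # requires torchvision >= 0.13
for param in model.parameters():
    param.requires_grad = False                    # freeze the pretrained features
model.fc = nn.Linear(model.fc.in_features, 10)     # new head is trainable by default

optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def fine_tune_step(images, labels):
    """One training step on a small labelled batch of painting images."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```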
51.
Che, X., Luo, S., Yang, H., Meinel, C.: Sentence Boundary Detection Based on Parallel Lexical and Acoustic Models. Proceedings of Interspeech 2016. pp. 257–261. San Francisco, CA, USA (2016).
In this paper we propose a solution that detects sentence boundaries in speech transcripts. First we train a purely lexical model with a deep neural network, which takes word vectors as its only input feature. Then a simple acoustic model is also prepared. Because the models work independently, they can be trained with different data. In the next step, the posterior probabilities of both the lexical and acoustic models are combined in a heuristic 2-stage joint decision scheme to classify the sentence boundary positions. This approach ensures that the models can be updated or switched freely in actual use. Evaluation on TED Talks shows that the proposed lexical model achieves good results: 75.5% accuracy on error-involved ASR transcripts and 82.4% on error-free manual references. The joint decision scheme can further improve the accuracy by 3–10% when acoustic data is available.
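One possible reading of such a 2-stage joint decision is sketched below: trust the lexical posterior when it is decisive, and fall back to fusing in the acoustic posterior otherwise. The thresholds and the averaging rule are illustrative assumptions, not the scheme's actual parameters.

```python
def joint_boundary_decision(p_lexical, p_acoustic,
                            confident=0.8, uncertain=0.3, fuse=0.5):
    """Heuristic two-stage fusion of lexical and acoustic boundary posteriors.

    Stage 1: trust the lexical model when it is confident either way.
    Stage 2: otherwise average in the acoustic posterior (e.g. pause-length cues).
    All thresholds here are illustrative, not the values used in the paper.
    """
    if p_lexical >= confident:
        return True
    if p_lexical <= uncertain:
        return False
    return (p_lexical + p_acoustic) / 2.0 >= fuse

# Lexical model is unsure (0.55); a long pause (acoustic 0.9) tips the decision.
print(joint_boundary_decision(0.55, 0.9))   # True
```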
52.
Amirkhanyan, A., Meinel, C.: Analysis of the Value of Public Geotagged Data from Twitter from the Perspective of Providing Situational Awareness. Proceedings of the 15th IFIP Conference on e-Business, e-Services and e-Society (I3E2016) - Social Media: The Good, the Bad, and the Ugly. Springer, Swansea, Wales, UK (2016).
In the era of social networks, we have a huge amount of social geotagged data that reflects the real world. These data can be used to provide or to enhance situational and public safety awareness, which can be achieved through the analysis and visualization of geotagged data that helps to better understand the surrounding situation and to detect local geo-spatial threats. One of the challenges on the way to this goal is providing valuable statistics and advanced methods for filtering data. Therefore, in the scope of this paper, we collect a sufficient amount of public social geotagged data from Twitter, build different valuable statistics and analyze them. We also identify valuable parameters and propose useful filters based on these parameters that separate valuable from irrelevant data and thereby support the analysis of geotagged data from the perspective of providing situational awareness.
53.
Luo, S., Yang, H., Wang, C., Che, X., Meinel, C.: Real-time action recognition in surveillance videos using ConvNets. International Conference on Neural Information Processing. pp. 529–537. Springer (2016).
The explosive growth of surveillance cameras and their 24/7 recording produces massive amounts of surveillance video data. How to efficiently retrieve the rare but important event information inside these videos is therefore an urgent problem. Recently, deep convolutional networks have shown outstanding performance in event recognition on general videos. Hence we study the characteristics of the surveillance video context and propose a very competitive ConvNets approach for real-time event recognition on surveillance videos. Our approach adopts two-stream ConvNets to recognize the spatial and temporal information of an action, respectively. In particular, we propose to use fast feature cascades and the motion history image as the templates of the spatial and temporal streams. We conducted our experiments on the UCF-ARG and UT-Interaction datasets. The experimental results show that our approach achieves superior recognition accuracy and runs in real time.
54.
Luo, S., Yang, H., Wang, C., Che, X., Meinel, C.: Action Recognition in Surveillance Video Using ConvNets and Motion History Image. Artificial Neural Networks and Machine Learning – ICANN 2016. pp. 187–195. Springer (2016).
With the significant increase in surveillance cameras, the amount of surveillance video is growing rapidly. How to automatically and efficiently recognize semantic actions and events in surveillance videos therefore becomes an important problem to be addressed. In this paper, we investigate state-of-the-art Deep Learning (DL) approaches for human action recognition and propose an improved two-stream ConvNets architecture for this task. In particular, we propose to use the Motion History Image (MHI) as the motion representation for training the temporal ConvNet, which achieved impressive results in both accuracy and recognition speed. In our experiments, we conducted an in-depth study of important network options and compared our approach to the latest deep networks for action recognition. The detailed evaluation results show the superior ability of our proposed approach, which achieves state-of-the-art performance in the surveillance video context.
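The Motion History Image used as temporal-stream input can be sketched with the classical formulation (recent motion set to maximum intensity, older motion linearly decayed); the threshold and decay values below are illustrative and may differ from those used in the paper.

```python
import numpy as np

def motion_history_image(frames, threshold=30, duration=255, decay=16):
    """Collapse a grayscale frame sequence into a single Motion History Image.

    Pixels that moved recently (frame difference above `threshold`) are set to the
    maximum intensity; older motion fades out by `decay` per frame, so brighter
    regions encode more recent movement. The resulting single-channel image can be
    fed to the temporal ConvNet stream in place of stacked optical flow.
    """
    mhi = np.zeros_like(frames[0], dtype=np.float32)
    for prev, curr in zip(frames[:-1], frames[1:]):
        motion = np.abs(curr.astype(np.int16) - prev.astype(np.int16)) > threshold
        mhi = np.where(motion, duration, np.maximum(mhi - decay, 0))
    return mhi.astype(np.uint8)

clip = np.random.randint(0, 256, size=(10, 120, 160), dtype=np.uint8)  # 10 frames
print(motion_history_image(list(clip)).shape)   # (120, 160)
```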
55.
Che, X., Staubitz, T., Yang, H., Meinel, C.: Pre-Course Key Segment Analysis of Online Lecture Videos. Proceedings of the 16th IEEE International Conference on Advanced Learning Technologies (ICALT 2016). Austin, Texas, USA (2016).
In this paper we propose a method to evaluate the importance of lecture video segments in online courses. The video is first segmented based on slide transitions. We then evaluate the importance of each segment based on our analysis of the teacher's focus, which is mainly identified by exploring features of the slides and the speech. Since the whole analysis process is based on multimedia materials, it can be done before the official start of the course. The proposed method is evaluated by means of survey questions and forum statistics collected in the MOOC "Web Technologies". Both the general trend and the high accuracy of the selected key segments (over 70%) prove the effectiveness of the proposed method.
56.
Wang, C., Yang, H., Meinel, C.: Exploring multimodal video representation for action recognition. 2016 International Joint Conference on Neural Networks (IJCNN). pp. 1924–1931 (2016).
57.
Filipiak, D., Agt-Rickauer, H., Hentschel, C., Filipowska, A., Sack, H.: Quantitative Analysis of Art Market Using Ontologies, Named Entity Recognition and Machine Learning: A Case Study. Proceedings of the 19th International Conference on Business Information Systems (BIS 2016). pp. 79–90. Leipzig, Germany (2016).
58.
Musavi, S., Ghasemzadeh, M., Meinel, C.: Geometric Design by Interactive and Evolutionary Design Methods. VII European Congress of Methodology, Palma de Mallorca, Balearic Islands, Spain. p. 47 (2016).
59.
Che, X., Wang, C., Yang, H., Meinel, C.: Punctuation Prediction for Unsegmented Transcript Based on Word Vector. Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016). pp. 654–658. Portorož, Slovenia (2016).
In this paper we propose an approach to predicting punctuation marks for unsegmented speech transcripts. The approach is purely lexical, with pre-trained word vectors as the only input. A Deep Neural Network (DNN) or Convolutional Neural Network (CNN) model is trained to classify whether a punctuation mark should be inserted after the third word of a 5-word sequence and, if so, which kind of punctuation mark it should be. TED talks from the IWSLT dataset are used in both the training and evaluation phases. The proposed approach shows its effectiveness by achieving better results than the state-of-the-art lexical solution that works with the same type of data, especially when predicting punctuation positions only.
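The windowing and classification setup described above can be sketched as follows; random vectors stand in for pre-trained word vectors and a small scikit-learn MLP stands in for the DNN/CNN models from the paper, so the example only shows the data flow, not the reported results.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

DIM = 50
rng = np.random.default_rng(1)
# Toy "pre-trained" word vectors; a real system would load embeddings instead.
vocab = {w: rng.normal(size=DIM) for w in
         ["we", "propose", "a", "method", "it", "works", "well", "so", "far"]}

def windows(words, size=5):
    """Slide a 5-word window; each label describes what follows the 3rd word."""
    pad = ["<pad>"] * 2
    padded = pad + words + pad
    feats = []
    for i in range(len(words)):
        window = padded[i:i + size]
        feats.append(np.concatenate([vocab.get(w, np.zeros(DIM)) for w in window]))
    return np.array(feats)

# Toy training transcript with known punctuation after each word.
text = ["we", "propose", "a", "method", "it", "works", "well"]
labels = ["O", "O", "O", "PERIOD", "O", "O", "PERIOD"]

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(windows(text), labels)
print(clf.predict(windows(["it", "works", "so", "far"])))
```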
60.
Wenzel, M., Meinel, C.: Full-Body WebRTC Video Conferencing in a Web-Based Real-Time Collaboration System. Proceedings of the 2016 IEEE 20th International Conference on Computer Supported Cooperative Work in Design. pp. 334–339. IEEE (2016).
Remote collaboration systems are a necessity for geographically dispersed teams in achieving a common goal. Real-time groupware systems frequently provide a shared workspace where users interact with shared artifacts. However, a shared workspace is often not enough for maintaining awareness of other users. Video conferencing can create a visual context that simplifies the users' communication and understanding. In addition, flexible working modes and modern communication systems allow users to work at any time and at any location. It is therefore desirable that a groupware system runs on users' everyday devices, such as smartphones and tablets, in the same way as on traditional desktop hardware. We present a standards-compliant, web browser-based real-time remote collaboration system that includes WebRTC-based video conferencing. It allows a full-body video setup where everyone can see what other participants are doing and where they are pointing in the shared workspace. In contrast to the standard WebRTC peer-to-peer architecture, our system implements WebRTC video conferencing in a star topology. In this way, our solution improves network bandwidth efficiency from linear to constant upstream consumption.
61.
Richly, K., Bothe, M., Rohloff, T., Schwarz, C.: Recognizing Compound Events in Spatio-Temporal Football Data. International Conference on Internet of Things and Big Data (IoTBD). pp. 27–35 (2016).
In the world of football, performance analytics about a player's skill level and the overall tactics of a match support the success of a team. These analytics are based on positional data on the one hand and events about the game on the other. The positional data of the ball and players is tracked automatically by cameras or via sensors. However, the events are still captured manually by humans, which is time-consuming and error-prone. Therefore, this paper introduces an approach to detecting events based on the positional data of football matches. We trained and aggregated the machine learning algorithms Support Vector Machine, K-Nearest Neighbours and Random Forest on features calculated from the positional data. We evaluated the quality of our approach by comparing the recall and precision of the results. This allows an assessment of how event detection in football matches can be improved by automating the process based on spatio-temporal data. We discovered that it is possible to detect football events from positional data; nevertheless, the choice of a specific algorithm has a strong influence on the quality of the predicted results.
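The aggregation of the three classifiers can be sketched with scikit-learn's VotingClassifier as below; the hand-made feature vectors (e.g. ball speed, ball-to-player distance, direction change) and the hard-voting setup are illustrative assumptions, not the features or aggregation actually used in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Each row is a feature vector derived from positional data around one time
# window; the values and labels are invented for illustration only.
X = np.array([[8.2, 0.4, 12.0], [1.1, 3.5, 2.0], [9.5, 0.3, 30.0],
              [0.7, 4.0, 1.0],  [7.8, 0.6, 25.0], [8.5, 0.5, 10.0]])
y = ["pass", "no_event", "shot", "no_event", "shot", "pass"]

# Aggregate SVM, k-NN and Random Forest by majority (hard) voting.
ensemble = VotingClassifier(
    estimators=[("svm", SVC()),
                ("knn", KNeighborsClassifier(n_neighbors=3)),
                ("rf", RandomForestClassifier(n_estimators=50, random_state=0))],
    voting="hard")
ensemble.fit(X, y)
print(ensemble.predict([[8.0, 0.5, 20.0]]))
```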
62.
Malchow, M., Renz, J., Bauer, M., Meinel, C.: Improved E-learning Experience with Embedded LED System. 2016 Annual IEEE Systems Conference (SysCon). IEEE (2016).
During the last years, e-learning has become more and more important. There are several approaches, such as tele-teaching or MOOCs, to deliver knowledge to students on different topics. A major problem of most learning platforms, however, is that students often lose motivation quickly, caused for example by solving similar tasks again and again and by learning alone on a personal computer. One possible way to avoid this situation in coding-based courses is the use of embedded devices. This approach increases the practical programming part and should boost students' motivation. This paper presents an approach that uses embedded systems with an LED panel to motivate students to use programming languages and complete the course successfully. To analyze the success of this approach, it was tested within a MOOC called "Java for beginners" with 11,712 participants. The result was evaluated through personal feedback from the students, and user data was analyzed to measure the acceptance and motivation of students solving the embedded system tasks. The results show that the approach is well accepted by the students and that they are more motivated by tasks with real hardware support.
63.
Malchow, M., Renz, J., Bauer, M., Meinel, C.: Enhance Embedded System E-learning Experience with Sensors. 2016 IEEE Global Engineering Education Conference (EDUCON). pp. 175–183. IEEE (2016).
Earlier research shows that using an embedded LED system motivates students to learn programming languages in massive open online courses (MOOCs) efficiently. Since this earlier approach was very successful, the system should be improved to further increase the learning experience for students during programming exercises. The problem with the current system is that only a static image is shown on the LED matrix, controlled by the students' array programming on the embedded system. The idea of this paper is to change this static behavior into a dynamic display of information on the LED matrix through the use of sensors connected to the embedded system. For this approach, a light sensor and a temperature sensor are connected to an analog-to-digital converter (ADC) port of the embedded system. These sensors' values can be read by the students to compute the correct output for the LED matrix. The result is captured and sent back to the students for direct feedback. Furthermore, unit tests can be used to automatically evaluate the programming results. The system was evaluated during a MOOC about web technologies using JavaScript. Evaluation results are taken from the students' feedback and an evaluation of the students' code executions on the system. The positive feedback and the evaluation of the executions, which show a higher number of code executions compared to standard programming tasks and the fact that students solving these tasks achieve overall better course results, highlight the advantage of the approach. Based on these evaluation results, this approach should be used in e-learning, e.g. in MOOCs teaching programming languages, to increase the learning experience and motivate students to learn programming.
64.
Sianipar, J., Willems, C., Meinel, C.: Crowdsourcing Virtual Laboratory Architecture on Hybrid Cloud. INTED2016 Proceedings. 10th International Technology, Education and Development Conference Valencia, Spain. 7-9 March, 2016. pp. 2940–2949. IATED (2016).
A virtual laboratory is needed for practical, hands-on exercises in e-learning courses. The e-learning system needs to provide a specific laboratory environment for a specific learning unit. A virtual laboratory system with resource-intensive learning units struggles to serve a large number of users, because the available hardware resources are limited and the budget to provide more resources is low. Moreover, the number of e-learning users that simultaneously access the virtual laboratory varies. In this paper, we propose an architecture for a virtual laboratory system that serves a large number of users and in which a person or a company can contribute hardware resources in a crowdsourcing manner. The system uses a hybrid cloud platform to be able to scale out and scale in rapidly. The architecture can expand by receiving more hardware resources from a person or a company that is willing to contribute; the resources can be anywhere but must be connected to the Internet. For example, if a user has a Virtual Machine (VM) in the cloud or on his own bare-metal system connected to the Internet, he can integrate his VM into the virtual laboratory system. Because the e-learning system is a non-profit system, we assume that some users and companies are willing to contribute. We use the Tele-Lab architecture as a basis for the proposed architecture. Tele-Lab is a virtual laboratory for Internet security e-learning; it uses a private cloud (OpenNebula) to provide the VMs and containers that represent hosts in a virtual laboratory. As in Tele-Lab, our architecture consists of a frontend and a backend. The frontend provides an interface to the users; we focus on the backend to be able to provide a virtual laboratory that can serve a large number of users. In the architecture, we use a middleware to provide communication between a private cloud and a public cloud, as well as communication between the virtual laboratory system and the resources that belong to the crowd. This work is part of the continuous improvement of Tele-Lab to make it more reliable and more scalable. We are heading toward using Tele-Lab in the implementation of Massive Open Online Courses (MOOCs).
65.
Bauer, M., Malchow, M., Staubitz, T., Meinel, C.: Improving Collaborative Learning With Video Lectures. INTED2016 Proceedings. 10th International Technology, Education and Development Conference Valencia, Spain. 7-9 March, 2016. pp. 5511–5517. IATED (2016).
We have addressed the problems of independent e-lecture learning with an approach involving collaborative learning with lecture recordings. In order to make this type of learning possible, we have prototypically enhanced the video player of a lecture video platform with functionality that allows simultaneous viewing of a lecture on two or more computers. While watching the video, synchronization of the playback and every click event, such as play, pause, seek, and playback speed adjustment can be carried out. We have also added the option of annotating slides. With this approach, it is possible for learners to watch a lecture together, even though they are in different places. In this way, the benefits of collaborative learning can also be used when learning online. Now, it is more likely that learners stay focused on the lecture for a longer time (as the collaboration creates an additional obligation not to leave early and desert a friend). Furthermore, the learning outcome is higher because learners can ask their friends questions and explain things to each other as well as mark important points in the lecture video.
66.
Bauer, M., Malchow, M., Meinel, C.: Schrittweiser Umbau einer Lernvideo-Plattform zur Unterstützung von HTML5 und HTTP-Videostreaming. Tagungsband GML². p. 320 ff (2016).
67.
Bauer, M., Malchow, M., Staubitz, T., Meinel, C.: Improving Collaborative Learning With Video Lectures. INTED2016 Proceedings. 10th International Technology, Education and Development Conference, Valencia, Spain. 7-9 March, 2016. pp. 5511–5517. IATED (2016).
We have addressed the problems of independent e-lecture learning with an approach involving collaborative learning with lecture recordings. In order to make this type of learning possible, we have prototypically enhanced the video player of a lecture video platform with functionality that allows simultaneous viewing of a lecture on two or more computers. While watching the video, synchronization of the playback and every click event, such as play, pause, seek, and playback speed adjustment can be carried out. We have also added the option of annotating slides. With this approach, it is possible for learners to watch a lecture together, even though they are in different places. In this way, the benefits of collaborative learning can also be used when learning online. Now, it is more likely that learners stay focused on the lecture for a longer time (as the collaboration creates an additional obligation not to leave early and desert a friend). Furthermore, the learning outcome is higher because learners can ask their friends questions and explain things to each other as well as mark important points in the lecture video.
68.
Amirkhanyan, A., Meinel, C.: Visualization and Analysis of Public Social Geodata to Provide Situational Awareness. Proceedings of the 8th International Conference on Advanced Computational Intelligence (ICACI2016). IEEE, Chiang Mai, Thailand (2016).
Nowadays, social networks are an essential part of modern life. People post everything that happens to them and around them. The amount of data produced by social networks increases dramatically every year, and users more and more often post geotagged messages. This gives us more possibilities for the visualization and analysis of social data, since we can be interested not only in the content of a message but also in the location from which it was posted. We aim to use public social data from location-based social networks to improve situational awareness. In this paper, we show our approach to handling geodata from Twitter in real time and providing advanced methods for visualization, analysis, searching and statistics, in order to improve situational awareness.