Teusner, R., Hille, T., Hagedorn, C.: Aspects on Finding the Optimal Practical Programming Exercise for MOOCs. In: Proceedings of the 47th Annual Frontiers in Education (FIE) Conference. IEEE (2017)
Massive Open Online Courses (MOOCs) cover manifold subjects, ranging from social sciences and languages to technical skills, and use different means to train the respective skills. MOOCs that teach programming aim to incorporate practical exercises into the course corpus to give students the hands-on experience necessary for understanding and mastering programming. These exercises, apart from technical challenges, raise a series of questions, for example: which fraction of the participants' time should they take (compared to video lectures and other course activities), which difficulty should be aimed for, how much guidance should be offered, and how much repetition should be incorporated? The perceived difficulty of a task depends on previous knowledge, supplied hints, the time required to solve it, and the number of failed attempts the participant made. Furthermore, the detail and accuracy of the problem description, the restrictiveness of the applied test cases, and the preparation provided specifically for a given exercise also influence the perceived difficulty of a task. In this paper, we explore the data of three programming courses to find criteria for optimal practical programming exercises. Based on over 3 million executions and scoring runs of participants' task submissions, we aim to deduce exercise difficulty, student patterns in approaching the tasks, and potential flaws in task descriptions and preparatory videos. We compare our findings to in-class trainings and traditional, mostly video- and quiz-based MOOCs. Finally, we propose approaches and methods to improve programming courses for participants as well as instructors.
Teusner, R., Rollmann, K.-A., Renz, J.: Taking Informed Action on Student Activity in MOOCs. In: Proceedings of the Fourth (2017) ACM Conference on Learning @ Scale. pp. 149–152. ACM, New York, NY, USA (2017)
This paper presents a novel approach to understanding specific student behavior in MOOCs. Instructors currently perceive participants only as one homogeneous group. In order to improve learning outcomes, they encourage students to get active in the discussion forum and remind them of approaching deadlines. While these actions are most likely helpful, their actual impact is often not measured. Additionally, it is uncertain whether such generic approaches sometimes cause the opposite effect, as some participants are bothered with irrelevant information. On the basis of fine-grained events emitted by our learning platform, we derive metrics and enable teachers to employ clustering in order to divide the vast field of participants into meaningful subgroups that can be addressed individually.
Teusner, R., Matthies, C., Giese, P.: Should I Bug You? Identifying Domain Experts in Software Projects Using Code Complexity Metrics. In: 2017 IEEE International Conference on Software Quality, Reliability and Security (QRS). pp. 418–425 (2017)
In any sufficiently complex software system there are experts who have a deeper understanding of parts of the system than others. However, it is not always clear who these experts are and with which particular parts of the system they can provide help. We propose a framework to elicit the expertise of developers and recommend experts by analyzing complexity measures over time. Furthermore, teams can detect those parts of the software for which currently no, or only a few, experts exist and take preventive action to keep the collective code knowledge and ownership high. We employed the developed approach at a medium-sized company. The results were evaluated with a survey comparing the perceived and the computed expertise of developers. We show that aggregated code metrics can be used to identify experts for different software components. The identified experts were rated as acceptable candidates by developers in over 90% of all cases.
Teusner, R., Wittstruck, N., Staubitz, T.: Video Conferencing as a Peephole to MOOC Participants. In: 2017 IEEE International Conference on Teaching, Assessment, and Learning for Engineering (TALE). IEEE (2017)
Distance education gained considerable attention with the rise of Massive Open Online Courses (MOOCs). Given the significant role collaboration plays in practical computer science education on campus, it becomes evident that today's online course platforms mostly lack the necessary collaborative capabilities. We present a solution to support collaborative programming through video conferencing for practical exercises employed in MOOC contexts. Two user surveys showed that although users value the possibilities, privacy concerns remain. We therefore propose to additionally use the technology to face another challenge: MOOCs are usually conceptualized and produced to a large extent before the actual course runtime. Reacting to current events within the course is possible but requires insights into students' problems. Course conductors can use the tutoring mode in our WebIDE to understand struggling students and potentially uncover topics that lack additional background material or need additional training exercises.
Staubitz, T., Teusner, R., Meinel, C.: Towards a Repository for Open Auto-Gradable Programming Exercises. In: 2017 IEEE International Conference on Teaching, Assessment, and Learning for Engineering (TALE) (2017)
Auto-gradable hands-on programming exercises are a key element of scalable programming courses. A variety of auto-graders already exist; however, creating suitable high-quality exercises in sufficient quantity is a very time-consuming and tedious task. One way to approach this problem is to enable the sharing of auto-gradable exercises between several interested parties. School teachers, MOOC instructors, workshop providers, and university-level teachers need programming exercises to provide their students with hands-on experience. Auto-gradability of these exercises is an important requirement. The paper at hand introduces a tool that enables the sharing of such exercises and addresses the various needs and requirements of the different stakeholders.
Staubitz, T., Teusner, R., Meinel, C.: openHPI's Coding Tool Family: CodeOcean, CodeHarbor, CodePilot. In: Automatische Bewertung von Programmieraufgaben (ABP) (2017)
The Hasso Plattner Institute has successfully run a self-developed Massive Open Online Course (MOOC) platform—openHPI—since 2012. MOOCs, even more than classic classroom settings, depend on automated solutions to assess programming exercises. Manual evaluation is not an option due to the massive number of users participating in these courses. The paper at hand maps the landscape of tools that are used on openHPI in the context of automated grading of programming exercises. Furthermore, it provides a sneak preview of new features that will be integrated in the near future. In particular, we introduce CodeHarbor, our platform to share auto-gradable exercises between various online code execution platforms.