1.
Koehler, D., Serth, S., Meinel, C.: Consuming Security: Evaluating Podcasts to Promote Online Learning Integrated with Everyday Life. In: 2021 World Engineering Education Forum/Global Engineering Deans Council (WEEF/GEDC). pp. 476–481. IEEE, Madrid, Spain (2021)
Traditional (online) teaching approaches put the student into a video-based, classroom-like situation. When asked to reproduce the content, students can consciously recall what they learned and answer accordingly. In contrast, knowledge of IT security requires sensitization for the topic throughout a learner's daily life. We learned from interactions with former learners that they sometimes found themselves in situations where, despite knowing better, they still behaved in an undesired way. We therefore conclude that the classroom-style presentation of knowledge in Massive Open Online Courses (MOOCs) is not sufficient for IT security education. This work presents the design of a study to assess and analyze different audio-based methods of conveying knowledge that integrate into a learner's everyday life. In the spirit of Open Research, we publish our research questions and chosen methods in order to discuss them within the community. Subsequently, we will study how learners perceive the proposed education methods and suggest possible improvements for follow-up research.
2.
Serth, S., Köhler, D., Marschke, L., Auringer, F., Hanff, K., Hellenberg, J.-E., Kantusch, T., Paß, M., Meinel, C.: Improving the Scalability and Security of Execution Environments for Auto-Graders in the Context of MOOCs. In: Greubel, A., Strickroth, S., and Striewe, M. (eds.) Proceedings of the Fifth Workshop "Automatische Bewertung von Programmieraufgaben" (ABP 2021). pp. 3–10. Gesellschaft für Informatik e.V. (GI), Virtual Event, Germany (2021)
Learning a programming language requires learners to write code themselves, execute their programs interactively, and receive feedback about the correctness of their code. Many so-called auto-graders exist to grade students' submissions and provide feedback automatically. University classes with hundreds of students or Massive Open Online Courses (MOOCs) with thousands of learners often use these systems. Assessing the submissions usually includes executing the students' source code and thus imposes requirements on the scalability and security of the systems. In this paper, we evaluate different execution environments and orchestration solutions for auto-graders. We compare the most promising open-source tools regarding their usability in a scalable environment required for MOOCs. According to our evaluation, Nomad, in conjunction with Docker, fulfills most requirements. We derive implications for the productive use of Nomad for an auto-grader in MOOCs.
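To illustrate the isolation requirements discussed in this paper, the following is a minimal sketch of executing an untrusted student submission inside a resource-constrained Docker container. It assumes the Docker SDK for Python and a running local daemon; it shows neither Nomad's orchestration layer nor the paper's actual implementation.

```python
# Minimal sketch (not the paper's implementation): run an untrusted student
# submission in an isolated, resource-limited Docker container.
import docker

client = docker.from_env()

def run_submission(code: str, timeout: int = 10) -> str:
    """Execute student code in a sandboxed container and return its output."""
    container = client.containers.run(
        image="python:3.10-slim",      # hypothetical grading image
        command=["python", "-c", code],
        detach=True,
        network_disabled=True,         # no outbound network access
        mem_limit="128m",              # cap memory usage
        pids_limit=64,                 # prevent fork bombs
    )
    try:
        container.wait(timeout=timeout)   # enforce a wall-clock limit
        return container.logs().decode()
    finally:
        container.remove(force=True)      # always clean up the sandbox

if __name__ == "__main__":
    print(run_submission("print(sum(range(10)))"))
```

In a production setting, an orchestrator such as Nomad would schedule many such containers across a cluster instead of starting them directly on one host.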
3.
Serth, S., Teusner, R., Meinel, C.: Impact of Contextual Tips for Auto-Gradable Programming Exercises in MOOCs. In: Proceedings of the Eighth ACM Conference on Learning @ Scale. pp. 307–310. ACM, Virtual Event, Germany (2021)
Learners in Massive Open Online Courses offering practical programming exercises face additional challenges beyond the actual course content. Beginners have to find approaches to deal with misconceptions and often struggle with the correct syntax while solving the exercises. This paper presents insights from offering contextual tips in a web-based development environment used for practical programming exercises. We measured the effects of our approach in a Python course with 6,000 active students in a hidden A/B test and additionally used qualitative surveys. While a majority of learners valued the assistance, we were unable to show a direct impact on completion rates or average scores. However, we noticed that users requesting tips took significantly longer and made more use of the platform's other assistance features than users in our control group. Insights from our study can be used to target beginners with more specific hints and provide additional, context-specific clues as part of the learning material.
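A hidden A/B test of this kind typically relies on deterministic group assignment so that each learner consistently sees or does not see the tips. The hash-based split below is an illustrative sketch only and is not taken from the paper.

```python
# Minimal sketch: deterministic A/B assignment by user ID (illustrative only).
import hashlib

def assign_group(user_id: str, experiment: str = "contextual-tips") -> str:
    """Assign a user to 'treatment' (tips shown) or 'control' deterministically."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100           # stable bucket in [0, 100)
    return "treatment" if bucket < 50 else "control"

if __name__ == "__main__":
    print(assign_group("learner-42"))        # same user always gets the same group
```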
4.
Bethge, J., Serth, S., Staubitz, T., Wuttke, T., Nordemann, O., Das, P.-P., Meinel, C.: TransPipe - A Pipeline for Automated Transcription and Translation of Videos. In: Proceedings of the 7th European MOOC Stakeholder Summit (EMOOCs 2021). pp. 79–94. Universitätsverlag Potsdam, Potsdam, Germany (2021)
Online learning environments, such as Massive Open Online Courses (MOOCs), often rely on videos as a major component to convey knowledge. However, these videos exclude potential participants who do not understand the lecturer's language, whether due to language unfamiliarity or hearing impairments. Subtitles and/or interactive transcripts solve this issue, ease navigation based on the content, and enable indexing and retrieval by search engines. Although there are several automated speech-to-text converters and translation tools, their quality varies and the process of integrating them can be quite tedious. Thus, in practice, many videos on MOOC platforms receive subtitles only after the course has already finished (if at all) due to a lack of resources. This work describes an approach to tackle this issue with a dedicated tool that closes the gap between MOOC platforms and transcription and translation services and offers a simple workflow that users with a less technical background can easily handle. The proposed tool is designed and evaluated through qualitative interviews with three major MOOC providers.
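The general transcribe-then-translate workflow described here can be sketched as follows. The service calls are hypothetical placeholders; TransPipe's actual integrations, data model, and review steps are not shown.

```python
# Minimal sketch of a transcription/translation pipeline producing WebVTT subtitles.
from dataclasses import dataclass

@dataclass
class Cue:
    start: str   # e.g. "00:00:01.000"
    end: str     # e.g. "00:00:04.000"
    text: str

def transcribe(video_path: str) -> list[Cue]:
    """Placeholder for a speech-to-text service call."""
    return [Cue("00:00:01.000", "00:00:04.000", "Willkommen zum Kurs.")]

def translate(cues: list[Cue], target_lang: str) -> list[Cue]:
    """Placeholder for a machine-translation service call."""
    return [Cue(c.start, c.end, f"[{target_lang}] {c.text}") for c in cues]

def to_webvtt(cues: list[Cue]) -> str:
    """Render cues as a WebVTT subtitle file for the video player."""
    blocks = [f"{c.start} --> {c.end}\n{c.text}" for c in cues]
    return "WEBVTT\n\n" + "\n\n".join(blocks)

if __name__ == "__main__":
    print(to_webvtt(translate(transcribe("lecture.mp4"), "en")))
```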
5.
Serth, S., Staubitz, T., Teusner, R., Meinel, C.: CodeOcean and CodeHarbor: Auto-Grader and Code Repository. In: Shaffer, C., Brusilovsky, P., Koedinger, K., and Edwards, S. (eds.) SPLICE 2021 Workshop CS Education Infrastructure for All III: From Ideas to Practice. p. 5. 52nd ACM Technical Symposium on Computer Science Education, Virtual Event (2021)
The Hasso Plattner Institute (HPI) has successfully operated a MOOC (Massive Open Online Course) platform since 2012. Since 2013, global enterprises, international organizations, governments, and research projects funded by the German ministry of education have been partnering with us to operate their own instances of the platform. The focus of our platform instance is on IT topics, including programming courses in different programming languages. Graded hands-on programming assignments are an important element of these courses. MOOCs, even more than traditional classroom settings, depend on automated solutions to assess programming exercises; manual evaluation is not an option due to the massive number of users participating in these courses. This paper presents two of the tools developed in this context at the HPI: CodeOcean, an auto-grader for a variety of programming languages, and CodeHarbor, a tool to share auto-gradable programming exercises between various online platforms.
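The core of automated assessment is running instructor-defined unit tests against a submission and turning the results into a score. The following is an illustrative sketch of that scoring step only; it does not reflect CodeOcean's internal design, which also involves sandboxed execution and platform integration.

```python
# Minimal sketch: grade a submission by the fraction of passing unit tests.
import unittest

# Hypothetical student submission, loaded from a sandboxed environment.
submission = {"add": lambda a, b: a + b}

class ExerciseTests(unittest.TestCase):
    def test_small_numbers(self):
        self.assertEqual(submission["add"](2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(submission["add"](-2, -3), -5)

def grade() -> float:
    """Return the achieved score as the fraction of passing test cases."""
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(ExerciseTests)
    result = unittest.TestResult()
    suite.run(result)
    total = result.testsRun
    failed = len(result.failures) + len(result.errors)
    return (total - failed) / total if total else 0.0

if __name__ == "__main__":
    print(f"Score: {grade():.0%}")
```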