Lessons in the Future of EdTech from Learning at Scale 2021

8:30 AM on July 16, 2021 | computer science

From Tuesday, June 22nd through Thursday, June 24th, members of the Codio team attended the Learning @ Scale and EMOOCs conferences. EMOOCs brought together experts on MOOCs (Massive Open Online Courses) and e-learning in general, primarily from Europe. Learning @ Scale (L@S) focused specifically on addressing the urgent challenges arising from the COVID-19 pandemic.

The conference featured a keynote from Coursera CEO Jeff Maggioncalda, who discussed how institutions can collaborate to respond to the global skills crisis by increasing access to high-quality online learning and to well-paying remote jobs.

Attending conferences such as L@S empowers our team to make informed decisions, rooted in research and user experience, when it comes to product improvements, always with the goal of providing a better platform for our instructors and learners.


Below you can find the unfiltered experiences and opinions of Mohit Chandarana, Data Products & Machine Learning Research Scientist, and Khalia Braswell, Learning Analytics & User Experience Research Intern.

Mohit Chandarana, Data Products & Machine Learning Research Scientist

Learning @ Scale, as a conference, had an overarching theme spanning many different subject areas that converge on large-scale, technology-mediated learning environments. The common thread between these environments is that they typically have many active learners and few experts on hand to guide their progress or respond to individual needs [1]. 2021 marks the second year that L@S has been a completely virtual event, based this year in Potsdam, Germany. The clearest example of how this has turned the conference into a global event is the way the paper presentations were organized: a total of 5 sessions, each representing the perspectives of a different part of the world, such as North America, the US East and West Coasts, Europe, and Oceania. These sessions were spread across all timezones but centered around Central European Time, which made it infeasible to attend every one of them.

The proceedings were divided into four major categories:

  1. Global Perspectives - research and synthesis papers
  2. Workshops
  3. Demonstrations
  4. Works-in-Progress

Key Takeaways: Global Perspectives and Works-in-Progress

Each of the 5 Global Perspectives sessions had its own underlying theme that most of the papers in that session touched on. For instance, the US East Coast session revolved around the theme of learnersourcing questions in a Massive Open Online Course (MOOC), which has garnered a lot of interest in the research community over time.

The past couple of years have seen an increase in the use of data mining and machine learning in CS education research, and this year’s conference was no exception. It was exciting to see these techniques in use for understanding how learners build cognitive presence [2], suggesting feedback in textual exercises [3], and automating the assessment of problem-solving practices [4].

What I found most interesting about the Works-in-Progress this year was the research around Automatic Question Generation [5] [6]. Studies like these, along with the increasing emphasis on rich, just-in-time feedback for auto-graded assessments, are paving the way for next-generation Intelligent Tutoring Systems for MOOCs and other at-scale learning environments that will radically change and improve asynchronous online learning.
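To give a concrete, deliberately simplified sense of what automatic question generation involves, here is a toy Python sketch that turns a sentence into a multiple-choice cloze item. The `make_cloze` function and its hand-supplied distractors are my own illustration, not the approach used in [5] or [6], which rely on NLP models to select terms and generate distractors automatically.

```python
def make_cloze(sentence: str, answer: str, distractors: list[str]) -> dict:
    """Turn a sentence into a multiple-choice cloze item by blanking
    out the answer term. A toy sketch: real AQG systems choose the
    term and the distractors automatically from the source text."""
    stem = sentence.replace(answer, "_____")
    # Sort options so the correct answer's position is not predictable.
    options = sorted([answer] + distractors)
    return {"stem": stem, "options": options, "answer": answer}

q = make_cloze(
    "A MOOC is a Massive Open Online Course.",
    "Massive Open Online Course",
    ["Modular Online Open Class", "Managed Open Online Curriculum"],
)
print(q["stem"])   # A MOOC is a _____.
```

Even this naive version hints at why the technique scales: every declarative sentence in a course text is a candidate item, which is exactly the kind of formative practice at scale that these papers investigate.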

Khalia Braswell, Learning Analytics & User Experience Research Intern

Although it’s been a year since we all had to adapt to online conferences, I still found myself struggling to attend sessions at their appropriate times, mainly because everything was based on Central European Time. This caused me to miss a few sessions I really wanted to attend, so I read the papers instead. Here's the insight I gained:

Programming Hints

I attended a few Works-in-Progress & Demos, starting with “Exploring Design Choices in Data-Driven Hints for Python Programming Homework” by Thomas W. Price, Samiha Marwan, and Joseph Jay Williams. I am very interested in how to help novice programmers learn computer science, and this team of researchers created CodeChecker, a system that generates hints from student data for CS1 online homework. Once students submit their homework, they see their code annotated with suggestions they can explore further. The researchers found that students liked the visual nature of the hints, as it let them know exactly where to look in their code. Generally, when you run code in an IDE and hit an error, you get a line number and a vague error message; CodeChecker instead shows students where the hints are in their code and lets them choose which hint to focus on.

I think something like this could be promising in Codio as students complete assignments and projects that build upon each other.
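As a rough illustration of what line-anchored hints look like, here is a minimal Python sketch that diffs a student's submission against a single reference solution and attaches a hint to each differing line. The `line_hints` function and its messages are hypothetical; CodeChecker itself is data-driven, mining many prior students' solutions rather than comparing against one reference.

```python
import difflib

def line_hints(student_code: str, reference_code: str) -> list[str]:
    """Produce line-anchored hints by diffing a submission against a
    reference solution (a hypothetical simplification of data-driven
    hint generation, which uses pools of peer solutions)."""
    hints = []
    matcher = difflib.SequenceMatcher(
        None, student_code.splitlines(), reference_code.splitlines()
    )
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op == "replace":
            for offset in range(i2 - i1):
                hints.append(f"Line {i1 + offset + 1}: consider revising this line.")
        elif op == "delete":
            hints.append(f"Lines {i1 + 1}-{i2}: this code may be unnecessary.")
        elif op == "insert":
            hints.append(f"After line {i1}: something may be missing here.")
    return hints

# The student accumulates with `=` instead of `+=`:
student = "def total(xs):\n    s = 0\n    for x in xs:\n        s = x\n    return s\n"
reference = "def total(xs):\n    s = 0\n    for x in xs:\n        s += x\n    return s\n"
for h in line_hints(student, reference):
    print(h)  # Line 4: consider revising this line.
```

Note how the hint points at a specific line without giving away the fix, which mirrors the finding that students valued knowing exactly where to look.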

Here is an example of a Code Suggestion

Learnersourcing Questions in a MOOC

Ever consider letting students create their own multiple-choice questions (MCQs) for the content they’re learning? In “What’s In It for the Learners? Evidence from a Randomized Field Experiment on Learnersourcing Questions in a MOOC”, Anjali Singh and her colleagues describe a randomized field experiment in which one group of students was required to generate MCQs for an intro data science MOOC while another group could do so optionally. They found that students valued creating questions; however, they created higher-quality questions when they chose to do it than when they were required to. Students who chose not to create any questions cited a lack of confidence in their ability, while those who did contribute felt they were helping future learners. I think this approach has promise in large classes and can increase engagement as students master new material.

Diversity, Equity, and Inclusion Dashboard

I also had the chance to talk to Kimberly Williamson, a PhD student at Cornell University, about her paper “Learning Analytics Dashboard Research Has Neglected Diversity, Equity, and Inclusion”. This paper interested me because part of my role at Codio this summer is to help with analytics dashboards for teachers. After talking with Kimberly, I realized her project didn’t align directly with our research; still, it was great to learn more about her work. Essentially, she is working with multiple departments across Cornell University to create a dashboard that will let administrators make key decisions about Diversity, Equity, and Inclusion, which is HUGE! I can see how that could be extended beyond Cornell to other institutions to truly get a pulse on what’s going on at their university and how they can improve. I’ll definitely be looking out for more from Kimberly.


References

[1] 2021. Proceedings of the Eighth ACM Conference on Learning @ Scale. Association for Computing Machinery, New York, NY, USA.

[2] John Hosmer and Jeonghyun Lee. 2021. How Online Learners Build Cognitive Presence: Implications from a Machine Learning Approach. In Proceedings of the Eighth ACM Conference on Learning @ Scale (L@S '21). Association for Computing Machinery, New York, NY, USA, 351–354. DOI:https://doi.org/10.1145/3430895.3460986

[3] Jan Philip Bernius, Stephan Krusche, and Bernd Bruegge. 2021. A Machine Learning Approach for Suggesting Feedback in Textual Exercises in Large Courses. In Proceedings of the Eighth ACM Conference on Learning @ Scale (L@S '21). Association for Computing Machinery, New York, NY, USA, 173–182. DOI:https://doi.org/10.1145/3430895.3460135

[4] Karen D. Wang, Shima Salehi, Max Arseneault, Krishnan Nair, and Carl Wieman. 2021. Automating the Assessment of Problem-solving Practices Using Log Data and Data Mining Techniques. In Proceedings of the Eighth ACM Conference on Learning @ Scale (L@S '21). Association for Computing Machinery, New York, NY, USA, 69–76. DOI:https://doi.org/10.1145/3430895.3460127

[5] Rachel Van Campenhout, Nick Brown, Bill Jerome, Jeffrey S. Dittel, and Benny G. Johnson. 2021. Toward Effective Courseware at Scale: Investigating Automatically Generated Questions as Formative Practice. In Proceedings of the Eighth ACM Conference on Learning @ Scale (L@S '21). Association for Computing Machinery, New York, NY, USA, 295–298. DOI:https://doi.org/10.1145/3430895.3460162

[6] Bill Jerome, Rachel Van Campenhout, and Benny G. Johnson. 2021. Automatic Question Generation and the SmartStart Application. In Proceedings of the Eighth ACM Conference on Learning @ Scale (L@S '21). Association for Computing Machinery, New York, NY, USA, 365–366. DOI:https://doi.org/10.1145/3430895.3460878