Survey Methods
This past summer, Codio conducted a survey to better understand the state of computer science education in higher education institutions in the United States.
A URL to the survey, hosted on Qualtrics, was emailed to US-based computer science and engineering school lecturers and professors drawn from a proprietary database of 4,419 contacts.
The survey consisted of 20 questions covering courses taught, number of students, course administration, grading and feedback for students, curriculum creation and development, and the institution’s technology setup. Participants were also asked about the types of resources they use in the courses they teach.
Qualtrics software estimated that the survey would take the average respondent 12 minutes to complete.
82 CS and STEM instructors (professors and lecturers) participated in the survey. Respondents were not required to complete every question, resulting in partial data for 33 participants and a full data set for 49 participants. In the analyses below, any respondent who answered the questions required for a given analysis was included, even if they provided only partial data overall. For example, below we analyze class size vs. GA/TA resources; participants who completed both of those fields were included even if they did not complete other demographic or background questions.
This way of recruiting participants and analyzing data has some clear limitations. Given the total number of CS educators, the number of respondents is quite small. This is compounded by recruitment being based on a database built in a proprietary, non-randomized, non-representative way, and by respondents being limited to the US. In terms of analysis, the use of incomplete data as described above could cause slight inconsistencies between analyses, since the set of participants included in each analysis can shift slightly.
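The inclusion rule described above — keep any respondent who answered the fields a given analysis requires, even if other answers are missing — is a form of pairwise deletion. A minimal sketch of the idea follows; the column names and values are hypothetical, not the survey’s actual fields:

```python
import pandas as pd

# Hypothetical survey export: one row per respondent, missing answers as None/NaN.
responses = pd.DataFrame({
    "class_size": [120, 45, None, 300, 28],
    "num_tas":    [2,   0,   1,   None, 0],
    "job_title":  ["Lecturer", None, "Professor", "Professor", None],
})

def pairwise_subset(df, required_cols):
    """Keep respondents who answered every field a given analysis needs,
    even if other fields are blank (pairwise deletion)."""
    return df.dropna(subset=required_cols)

# For a class-size vs. TA-support analysis, only those two fields must be present.
size_vs_tas = pairwise_subset(responses, ["class_size", "num_tas"])
print(len(size_vs_tas))  # respondents usable for this particular analysis
```

Because each analysis draws its own subset, the effective sample can differ from one analysis to the next — the shifting-participant-set caveat noted above.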
Respondent Demographics/Context
A quick categorization of submitted position titles reveals a wide range of respondents’ roles, from graduate students to department chairs. The majority of respondents were in teaching, tenure-track, or tenured faculty positions. Based on job title, 65.3% appeared to be tenure track, compared to the national average of approximately 82% reported by the National Academies of Sciences, Engineering, and Medicine (2018). This could reflect the growth of teaching faculty at universities and colleges: teaching positions grew 67% versus 22% growth in tenure-track positions (ibid.).
Categorizing all programming languages listed as being taught in reported courses (with some grouping for those with smaller counts), we see that Java, Python, and C/C++/C# account for 60% of the programming languages students are learning. This mirrors the top spots of the TIOBE Index, whose rankings as of September 2019 are Java, C, Python, C++, and C# (TIOBE Index for September 2019, 2019).
Looking at the wide variety of courses taught by respondents, it is notable that 39% of reported courses were either a CS1 (Introduction to CS/Programming) or CS2 (Data Structures) course. The chart below illustrates the distribution of students among courses with over 100 students; these students represent about 85% of the reported student count, with the rest spread across a variety of smaller courses such as ethics, math and statistics, and IT courses. A surprisingly large number of students are in business courses that teach flavors of programming (often Visual Basic within Excel). This could be the result of an increase in non-CS majors enrolled in CS courses, a rising trend in higher education, with some institutions introducing special mandates for programming courses for certain non-CS majors and others requiring that all non-CS majors take at least one programming course in order to graduate (Chilana et al., 2015).

Looking at the average class size for these course titles, the largest five courses on average were: Business (383 students), Computer Systems (156 students), Computer Architecture (83 students), Compilers (78 students), and Capstone (62 students). The smallest classes, with average enrollments under 20, were: CS for non-majors (17 students), Algorithms (16 students), Ethics (15 students), and Databases (13 students).
Average class size did not appear to follow any pattern based on a course’s position in the CS trajectory or the perceived difficulty of its content. Instead, courses typically required for a CS major had higher counts than those often offered as electives. For example, following CS1 and CS2 are Embedded Programming (often an Assembly course) and Programming Languages, with eleven each, compared to Data Science with a count of four.
CS is taught in a range of classroom formats, from one-on-one tutoring to traditional university lectures to Massive Open Online Courses (MOOCs). In total, respondents taught 12,620 students over the 2018/2019 academic year. The average class size for respondents is 70 students, with reported class sizes ranging from 7 to 2,000. Each instructor reported teaching an average of 2.6 classes.

Just under half (47%) of participating instructors taught at least one class of 50+ students, while 53% of respondents taught only courses of 49 or fewer students during the 2018/2019 academic year. We consider a class of 50 or more “large-scale” and teachers of those classes as “teaching-at-scale.”
What Technology Benefits Are Teachers Interested In?
While shifting trends in CS education are causing higher adoption rates for technology, there is a non-trivial cost to this adoption process. Given this barrier to change, it is important to know which benefits (or the addressing of which pain points) make this effort worthwhile for a teacher.
We asked teachers to indicate their interest level in 12 potential platform benefits. 89.2% were interested in reducing grading time, and 78.9% were interested in the availability of auto-grading. The next most appealing potential benefits were higher student engagement (79.4%) and higher student satisfaction with the course (78.9%). In the next section, we discuss the corresponding pain points in order of magnitude based on these results (grading/feedback, curricular challenges, and course administration).
While most instructors were relatively uninterested in reducing the cost of student VMs (43% not interested), GAs (42% not interested), and physical labs (34% not interested), they did express interest in reducing the cost of textbooks (72% interested). In later sections, we explore who is paying for these resources to shed some light on this divide of interests.

Educator Challenges and Needs

For years, soaring CS enrollment has been widely reported as causing increasing demand for CS courses across the board. The current enrollment “boom” is causing a unique set of challenges for CS educators.
Even with the movement to hire more teaching faculty, the large number of posted positions that go unfilled for lack of supply points to a rapidly growing gap between teachers and students. This is understandable when “approximately 57 percent of all new [Computer Science] Ph.D.s in North America take a position in industry” (National Academies of Sciences, Engineering, and Medicine, 2018).
To help clarify the challenges of this growing disparity between demand and resources, we asked participating instructors to rank their “pain points” or challenges in terms of feedback/grading, curriculum creation and setup, and course administration.
Grading Challenges
There are many challenges around grading: inadequate GA/TA resources, providing timely feedback to students, and notably in CS, the incompatibility of a student’s development environment and the grading environment (resulting in the dreaded “but it worked on my machine!”). When asked to rank grading challenges, manual grading time was overwhelmingly ranked as the greatest pain point, with 72% of respondents placing it first.
Manual grading is a bigger concern for those with small classes (< 50 students), with 79% saying it was their greatest challenge in terms of grading/feedback. 63% of instructors with large classes ranked manual grading time as their highest-priority pain point.

The majority of respondents (63.9%) lacked access to grading or teaching assistants. Of those who reported having GA/TA support in their largest class, 36.4% had two or more for the 2018/2019 academic year. The distribution of grading support looked strikingly different for teachers with under 100 students (small-scale) versus those with over 100 students (large-scale).
For respondents teaching small classes, there was no discernible correlation between the total number of students and the total number of TAs per professor. The majority of teachers in this group do not receive any TA support, which likely contributes to the greater concern about manual grading in this segment compared to instructors of larger classes, who often have grading help.
A weak correlation between total students and TAs was found for those teaching at scale. The fitted model’s coefficient is equivalent to a 1:73 TA-to-student ratio, and the chart reveals a handful of teachers with courses well below even that daunting ratio. Many teachers are overburdened and lack the necessary support, suggesting a need for scalable technology as student-to-teacher ratios increase.
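A coefficient like this comes from an ordinary least-squares fit of TA count against student count; the slope can then be inverted to read as a students-per-TA ratio. The sketch below illustrates the calculation with made-up data chosen only for demonstration, not the survey’s actual responses:

```python
import numpy as np

# Hypothetical (students, TAs) pairs for large-scale courses -- illustrative only.
students = np.array([100, 150, 220, 300, 400, 600])
tas      = np.array([1,   2,   3,   4,   5,   8])

# Fit TAs = slope * students + intercept by ordinary least squares.
slope, intercept = np.polyfit(students, tas, 1)

# A slope near 1/73 would correspond to roughly one TA per 73 students.
print(f"implied ratio: 1 TA per {1 / slope:.0f} students")
```

Teachers sitting well below the fitted line on such a chart have fewer TAs than the overall trend would predict for their enrollment, which is what the paragraph above flags as the overburdened group.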
Anecdotally, we know that many departments either maintain homebrew submission systems or resort to manual grading. Some use built-in LMS features like rubrics, or free tools like CodeWorkout and Web-CAT for auto-grading. Others use paid services like Gradescope (recently acquired by Turnitin, a plagiarism-detection company), which supports both manual and auto-grading. There is clearly demand for an easy-to-use auto-grading product that gives meaningful feedback to students.
Curriculum Challenges
The second and third most potentially interesting platform benefits—higher student engagement and satisfaction—are related to curriculum challenges. These challenges include not only low engagement but also high attrition, acting as technical support, and configuring student machines. When asked to rank these pain points, 27% of respondents indicated that low student engagement was their highest-priority pain point. When ranks were averaged, acting as technical support topped the list of challenges. While these results might at first seem muddled, clarity emerges when the data is split by scale.
Among those teaching at smaller scale, configuring software on student machines and acting as technical support were tied, with 50% of respondents placing each in one of their top two spots. These pain points are primarily technological in nature.
In contrast, those in larger-scale teaching contexts were primarily concerned about low student engagement/satisfaction with 53% of respondents ranking it as the top or second challenge. This priority is primarily pedagogical in nature.
Slicing this data another way, if we consider configuring machines and acting as technical support to be “Technology Concerns” and low student engagement/satisfaction and high attrition to be “Pedagogical Concerns,” the difference is quite clear:

Large-scale teachers are more concerned about pedagogical challenges, while small-scale teachers are more concerned with technological challenges. This may be due to the difference in available resources. As we saw with TA resources above, large-scale teachers often have more support, which could free them from other concerns and allow them to focus on pedagogy. Because the distribution of these resources is often outside the teacher’s purview, this leads us to the third type of challenge: course administration.

Course Administration Challenges

With 76% of teachers interested in “time savings” from technology, some of that pressure is probably due to course administration. Respondents were asked to rank increasing student-to-teacher ratios, mandatory LMS usage (Canvas, Blackboard, Moodle), and inappropriate classroom/lab space.
56% ranked increasing student-to-teacher ratios as their highest-priority pain point. Of the 53% of respondents who ranked inappropriate space as their second concern, 78% also ranked increasing ratios first, suggesting the two concerns are related.
Physical labs are widespread, with 77% of respondents having them. A variety of digital options have also emerged, with 58% reporting some combination of virtual labs, student VMs, or cloud options.

Looking at the distribution of these resources, 81% of those teaching-at-scale indicated their institution uses physical lab space, compared to 74% of teachers with small classes. As we have seen elsewhere, at-scale teaching appears to come with access to more technological resources across the board.
Overall, the majority of respondents’ institutions cover the cost of physical labs: 61% of respondents’ institutions cover the cost, while 26% say their students are responsible. For student VMs, many respondents (40%) were unsure how the VMs were paid for, but institutions cover these costs at a rate (60%) similar to physical labs.
Summary of Challenges
Looking across the pain points above, we notice consistent differences in resources between those teaching at scale and those instructing smaller classes. As a result, those teaching smaller courses have greater concerns over manual grading, struggle with technological rather than pedagogical challenges in the curriculum, and feel disproportionately burdened by increasing student-to-teacher ratios.
Curricular Resources Usage and Needs
Textbooks have slowly lost market share as teachers create, customize, and adopt Open Educational Resources (OERs). Others have turned to paid platforms such as zyBooks, or to digital tools created by the same textbook publishers. Looking at the distribution of reported resource usage, a significantly higher proportion of respondents used OERs to deliver course materials than used paid resources.
9.5% of respondents reported using paid resources alone, while 43% reported using only OERs. 47.6% of respondents indicated that they utilized both paid and open educational resources to support their course delivery. This large subset could indicate teachers’ desire to customize content: 52% factor the “ability to customize course materials” into their tech decisions (Top Hat, 2018). Most paid services that make money on the content itself do not allow editing, making OERs the more flexible option.
The majority of instructors teaching-at-scale used only OER materials. 95% reported using OERs alone or with paid resources. According to the survey results, teachers with smaller class sizes are more than twice as likely to use paid resources only.
Strikingly, only 33% of small-class teachers reported using OERs alone. Their higher use of paid resources (67% compared to 44%) could be due to several factors. Those teaching smaller classes might be at smaller institutions, meaning that instead of teaching multiple sections of one course, each course they teach could need individual preparation. This would greatly increase the time these teachers would need to curate and customize OERs, making off-the-shelf content appealing. Alternatively, these small classes could be highly specialized, making OERs much scarcer for those topics.
Textbooks remain widely used. However, they are expensive, and the high cost can have a detrimental impact on student satisfaction. For 94% of respondents, students are responsible for paying for their own textbooks. No teachers-at-scale reported that their institution covered the cost of their students’ textbooks. 58% of all respondents who use paid resources (whether in combination with OERs or not) said their students were responsible for covering the cost of “platforms that offer interactive textbooks.”

Technology Platform Needs
We now have an interesting conundrum: which platform benefits do professors find so useful that they are willing to take on the cost of the platform? We asked teachers to rate each of the following features as Very Useful, Neutral, or Least Useful.

Most Appealing Features
The majority of instructors were interested in curricular assessment resources, favoring editable, auto-graded questions over questions in text or a simple bank of questions. Similarly, instructors ranked editable text over a simple PDF or digital text. This reiterates teachers’ increasing need to customize content for their context. It also emphasizes teachers’ desire for technology to take on some of the grading burden—with much less interest in the analysis of those grades at any level larger than the individual student.

Finally, having considered the challenges, current resources, and potential benefits in CS higher ed, we are left with the question: what would make the cost of switching worthwhile? With only 65% of teachers actively looking to switch or open to switching within the next 12 months, the bar is high. We asked which of the above features would cause teachers to consider switching.
Many of the results were similar. Notably, a downloadable PDF and the student dashboard dropped on teachers’ priority lists, marking them as “nice to have” rather than “need to have.” A digital textbook that lets students write and run code jumped from 6th to 3rd on teachers’ priority list. This feature is interestingly specific to CS; the desire might be due to many general-purpose education platforms not currently supporting it.
Conclusion
Several interesting trends emerge throughout this report. Teachers are feeling the increasing student-to-teacher ratios reported in the news. These pressures are felt most acutely in the increased grading burden, particularly in contexts without TA support. Teachers also feel strain around technology setup in small classes, and pedagogically in large classes, where attrition is increasing and student engagement is falling.
The new wave of content platforms should allow teachers to modify content to their context and fill in perceived holes, as indicated by the increasing use of supplemental materials in conjunction with paid resources. These tools should provide grading (notably auto-grading) support, not just to relieve the grading burden but also to give students rich and instant feedback on their understanding. Unique to our field, there is a willingness to switch products for a platform that allows students ample coding practice alongside the text, with the ability to create, edit, and run code.
Interestingly, there is a disparity in resources between institutions. This disparity appears along the dimension of class size, which probably predicts department and institution size. This means that price discounts by volume (a common practice) might exacerbate this inequity of access. This issue becomes more important given the shift of who is paying for these resources in some institutions. As more universities foot the bill for platforms and receive volume discounts, this puts smaller colleges at a disadvantage, unlike textbooks which the vast majority of institutions rely on students to purchase individually.