Bias Is Not the Only Crisis in University Education

I make a good part of my income tutoring computer science students online. One thing you learn doing this is that there are 3,417,612 final exams in computer science coming up and there is at least one panicked student in each one.

Roughly.

As a result, I have a lot of students contacting me for help with labs and assignments, and for studying for their upcoming finals. Which is fine—I love to teach and that's what I'm there for. But I'm learning something about the way computer science is being taught in university.

Basically, in general, it sucks.

We look at the part of the assignment that says what is to be delivered, which is almost always a program that is to compute some result. Usually, the student submits the program through some online process, and it's graded in part by a program provided by the professor. Ideally, the process results in a grade, practically untouched by human hands.

In real-world software engineering, we call this an "acceptance test," and as a test, it's not a bad thing. Computer programs should more or less deterministically compute the expected result from known inputs, and automating those tests means they're done more often and more thoroughly.
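The idea can be sketched in a few lines of Python. Everything here is hypothetical for illustration: `average` stands in for a student's submission, and the test cases are invented, not drawn from any actual course's grading script:

```python
# A minimal sketch of an automated acceptance test, assuming a
# hypothetical student function average(numbers) that should
# return the arithmetic mean of a list of numbers.

def average(numbers):
    # Stand-in for the student's submitted code.
    return sum(numbers) / len(numbers)

# Known inputs paired with expected outputs. Because the program
# should compute its result deterministically, the grader only has
# to compare actual output against expected output for each case.
test_cases = [
    ([1, 2, 3], 2.0),
    ([10], 10.0),
    ([2, 4, 6, 8], 5.0),
]

def grade(func, cases):
    """Run every case and count how many produce the expected result."""
    passed = sum(1 for args, expected in cases if func(args) == expected)
    return passed, len(cases)

passed, total = grade(average, test_cases)
print(f"{passed}/{total} tests passed")  # prints "3/3 tests passed"
```

Because the whole check is mechanical, it can be rerun on every submission at no extra cost, which is exactly why automated acceptance tests get run more often and more thoroughly than manual ones.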

What I'm seeing, however, is that the way students are being taught, combined with this "untouched by human hands" approach to grading, is not working.

Programming is a funny thing -- it's as much a craft as anything, and like woodworking or knitting it isn't enough to know the theory, you need to be taught, well, how to do it. Forty years ago, there were several excellent books on how to program — Systematic Programming and Algorithms + Data Structures = Programs by Niklaus Wirth, Techniques of Program Structure and Design by Ed Yourdon, and Software Tools by Kernighan and Plauger are some of my favorites. But the Wirth books used the Pascal language, which is antiquated and unfashionable; Software Tools uses a preprocessor dialect of FORTRAN, which "real" computer scientists mostly don't even like to remember; and Yourdon's book was directed toward mainframe programmers with COBOL and PL/I.

Modern introductory computer science classes now seem mostly to use Java or Python, or C++ or even C -- all of which are marvelous, useful languages, but all of which presume a good bit of knowledge going in. New students are apparently expected to learn what they need by osmosis. Some of them do—I assume from teaching assistants—and some of them just don't.

This should probably make me happy. I get tutoring students because of this, and since I'm good at explaining how to look at programs and programming, I can charge a fairly high hourly rate. But my students shouldn't need to pay me: programming skill is as essential to a computer science student as the ability to write a coherent paragraph is to an English major. In English departments, it's at least a reasonable assumption that students come in with some practical fluency in the language — computer science students not so much, especially since there are so many programming languages in common use.