A few years ago, when I was teaching Freshman English in South Korea, our school purchased several thousand licenses for an automated essay grading system, the darling of a very large educational conglomerate. This system shall go unnamed, but let’s call it SuckTron3000. SuckTron3000’s ostensible goal was to take students’ essays and – beep, bop, boop – return immediate, accurate, and understandable feedback – as well as grades – to students and teachers alike. We tried it ourselves: with sample essays from our coursebooks, with previous students’ work still littering our desks, with any text we could get our hands on. The result? Garbage. Well-formatted, easy-to-understand, data-heavy garbage. Sure, the SuckTron3000 could mostly wrangle tense problems, identify run-on sentences, flag spelling errors, and the like. But in terms of content, cohesion, thesis statements, relevance, and everything else that separates a strong essay from a weak one, the SuckTron3000 was just so much garbage.

Of course, we were obligated to use all the licenses the school had purchased.

Naturally, I am skeptical of such systems. They will, eventually, play a role in testing regimes around the world, I have no doubt. But so long as artificial neural networks and quantum computing remain costly and rudimentary, they aren’t ready for prime time. Or so I thought. This article on Slate intrigued me. First, WriteLab doesn’t grade students’ work, which is a point in its favor. Second, it uses an interesting, Socratic approach to giving feedback. Third, it does something teachers find vexingly difficult to achieve: it gets students to revise their writing.

I haven’t tried it, and the article quotes one of WriteLab’s developers as saying that many students stop using it after a few months (they grow frustrated with the Socratic questioning, preferring that the system just tell them what to do… sigh, millennials), but it still sounds pretty nifty to me.

Take that, SuckTron3000.