
I have the pleasure of taking part in Object-Oriented Programming, a course taught at TU Wien this semester. OOP is a mandatory course, taken by a couple of hundred students every single semester. It might also be one of the most criticized courses in the whole curriculum. Or at least, that's the impression you get if you talk to other students.

I know what you are thinking: people like to rant about anything. That's what I thought before taking the course too. My reasoning was, as usual, 'How bad could it be?' To be fair, it's not all bad. But there certainly are glaring issues with the way the course is organized.

An introduction to OOP

Before I get to that, let me tell you what OOP (the course) is about. There are two parts: lectures and exercises. During the lecture, students are taught some basic principles of OOP, using examples written in Java. Inheritance, generics, loose coupling, that kind of stuff. The exercises aim to force students to apply the principles learned in the lecture. Each week, there is a new task which usually involves writing a couple of classes - again, using Java. These tasks have to be finished in groups of three. Usually, a task consists of a set of very loose functional requirements, a couple of paragraphs highlighting what you should pay special attention to, and a couple of constraints, such as 'Don't you dare use generics!'.

Now, let me describe the major issue I have with this course. It's really just one. A lot of people complain about the lecture and how boring it is, but that's not really different from many other courses. I also think the actual content of the lecture is fine. What I really have a problem with are the practical exercises.

For one, they take ages to be evaluated. Like, two or three weeks at times. Sure doesn't sound like much, but imagine having just finished the task for the third week. Now you get the feedback that whatever you did in the first week was terrible. Obviously, because you did not know, your tasks for the second and third week are also terrible, because you made the exact same mistakes again. Not something that I think students should just have to 'live with'. Not the worst thing either, true, but a good indication that something about the exercises in general is off.

I get it, there are just two professors, they have to correct a couple of hundred submissions each week, and that takes a ton of time. Also, the submissions themselves are not easy to evaluate, because there are almost no guidelines that students can adhere to. Which means that each submission has the potential to be totally different from the one before. I think that really is the root of the problem. Above I mentioned 'loose functional requirements'. Basically all that students have to go by are statements like 'You should write a class representing this thing that does that'. No further indication. This gives students the freedom to write anything, really. That is a problem not only for the professors, who take way more time to correct things than they should, but also for the students. It's simply very frustrating to not actually know what to do, and to fabricate code without any indication of whether what you are doing is correct.

What's more, overly general requirements in this context even serve as a kind of 'obfuscation'. In my experience, having to focus on what the exercise text actually means, or what is desired by the authorities, hinders one in connecting what was learned in the lecture to the actual task. In simpler words: students often fail to realize what the benefits of certain language features are. And I don't think you can fault them for that, because all the noise produced by weird and fuzzy task descriptions kind of drowns out the actually important stuff - like actually understanding inheritance, and why loose coupling is good.
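To show what I mean by the 'important stuff', here is a minimal sketch of my own (not taken from the course material, all names made up) of why loose coupling pays off: the report class below only knows about an interface, so the concrete data source can be swapped without touching it.

    // Report depends only on the DataSource interface, so the concrete
    // source can be replaced without changing Report at all - that is
    // the pay-off of loose coupling.
    interface DataSource {
        String read();
    }

    class FileSource implements DataSource {
        public String read() { return "data from a file"; }
    }

    class Report {
        private final DataSource source;   // depends on the abstraction only

        Report(DataSource source) { this.source = source; }

        String render() { return "Report: " + source.read(); }
    }

    class Main {
        public static void main(String[] args) {
            System.out.println(new Report(new FileSource()).render());
        }
    }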

Solutions

So, now that I've ranted enough about what is wrong with the exercises, I'll propose a couple of improvements. Okay, I don't have any experience in teaching university courses, so what would I know about improving this particular course? Well, I can point to other courses that just get things right. There are hardly as many complaints about TGI, Datenbanksysteme, or Funktionale Programmierung. It's certainly true that every course has different requirements, and that there's no silver bullet when it comes to the design of practical tasks. However, looking at other courses shows that there are characteristics of exercises and their evaluation that are clearly beneficial.

  1. Simplicity: Obviously, exercises don't need to be simple to solve. I don't think they should be, otherwise you get what students call 'a gschenktes Fach', which basically translates to 'a gift'. What should be simple is the task description. If I have to study the task description for about 30 minutes before I get what I am actually supposed to do, you are doing it wrong. The problem posed by an exercise should not be 'what am I supposed to do' but rather 'how should I do it'. For example: I am supposed to write a program that does this and that, okay, how should I go about doing that?

  2. Relevance: Students should understand how a certain exercise connects to the things learned in the lecture, at the very least. It would be even better if students also understood some practical uses these things have, but I realize that this is generally hard to do. Then again, I think OOP has it easier there than other courses.

  3. Transparency: When an exercise is evaluated, students should understand what they did wrong. The criteria for point deductions should be easy to understand and public. What should go without saying: all students are treated equally. Which is easy enough to verify if the evaluation criteria are publicly available.

I think I have already highlighted enough how the OOP exercises fail to match those characteristics. How could they be restructured to better satisfy them? To improve simplicity, exercises should be split into shorter tasks which themselves have short descriptions. Some of those tasks could be actual coding tasks, as the whole exercise is now, and some could even be theoretical questions. The problems posed could include well-known ones, such as implementing certain patterns or solving issues in program design. This would give students the option to do research on their own if the lecture material proves insufficient.

Additionally, it would partly address the issue of relevance. Imagine giving students simple refactoring tasks on two classes, one well designed and one not, and having them count the number of lines they actually had to change in each. Or have them replace a collection of near-identical classes with a single one using generics - thus demonstrating the benefits of this language feature (see the sketch below). There sure are a lot of possibilities. Granted, it won't be easy to design tasks in a way that provides relevance, but I think one should strive to do so anyway.
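To make that last idea concrete, here is a rough sketch of what such a generics task could boil down to. This is my own hypothetical example with made-up class names, not an actual course task:

    // Without generics you end up with one nearly identical class per
    // element type...
    class IntBox {
        private int value;
        void set(int value) { this.value = value; }
        int get() { return value; }
    }

    class StringBox {
        private String value;
        void set(String value) { this.value = value; }
        String get() { return value; }
    }

    // ...while a single generic class covers all of them, type-safely.
    class Box<T> {
        private T value;
        void set(T value) { this.value = value; }
        T get() { return value; }
    }

    class Demo {
        public static void main(String[] args) {
            Box<String> greeting = new Box<>();
            greeting.set("Hello OOP");
            System.out.println(greeting.get());   // prints "Hello OOP"
        }
    }

Counting how many classes (and duplicated lines) disappear after the rewrite makes the benefit of generics hard to miss.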

Now to address transparency. Usually, courses have something like a grading scale, specifying which parts of the exercise are worth how many points. Granted, OOP does this too. However, the grading scale itself suffers from the ever-present problem of vagueness. I propose using automated tests for the majority of tasks. It works well enough for other courses. Where tasks can not be evaluated using tests, they need to be evaluated manually. This might be needed for theoretical questions, or for bigger tasks where students write a lot of code freely. However, the instances where manual evaluation is required should be kept to a minimum. This would not only improve transparency; if parts of the tests were made public too, students would have an even easier time understanding what a certain task is about.
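As a rough sketch of what I mean, assume an exercise asks for a bounded stack with push and pop (a hypothetical task, with JUnit 5 assumed as the test framework). A published grading test like the one below spells out exactly which behaviour earns the points:

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.Assertions.assertThrows;
    import org.junit.jupiter.api.Test;

    // Hypothetical exercise class, i.e. what students would hand in.
    class BoundedStack<T> {
        private final Object[] items;
        private int size;

        BoundedStack(int capacity) { items = new Object[capacity]; }

        void push(T item) {
            if (size == items.length) throw new IllegalStateException("stack is full");
            items[size++] = item;
        }

        @SuppressWarnings("unchecked")
        T pop() {
            if (size == 0) throw new IllegalStateException("stack is empty");
            return (T) items[--size];
        }
    }

    // The published grading tests: every requirement is a test case,
    // every test case is a transparent, automatically checkable criterion.
    class BoundedStackTest {
        @Test
        void popReturnsLastPushedElement() {
            BoundedStack<String> stack = new BoundedStack<>(10);
            stack.push("a");
            stack.push("b");
            assertEquals("b", stack.pop());
        }

        @Test
        void popOnEmptyStackThrows() {
            BoundedStack<String> stack = new BoundedStack<>(10);
            assertThrows(IllegalStateException.class, stack::pop);
        }
    }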

After reading all that, maybe you ask yourself what exactly a good exercise in OOP would look like. Well, so do I. If my motivation holds, or rather, if my frustration with OOP continues, I might try to design one or two exercise sheets myself. Just for fun. Stay tuned, I guess.
