
Working around R/Exams not having "follow-on"

Tags:

moodle

r-exams

Our department is just starting the process of moving our questions into R/Exams. Up until now we've been using an in-house system that allows for "follow-on". By "follow-on" I mean this: suppose part (b) of a question uses the answer to part (a). If the student gets part (a) wrong but correctly uses their incorrect part (a) answer when calculating part (b), follow-on allows them to be marked correct for part (b).
Without follow-on (which R/Exams doesn't have), they would get part (b) marked wrong because of their mistake in part (a), even though their working for part (b) was correct.

I'm therefore wondering how we can get around this.
Some of my initial ideas are below. I'm really keen to hear comments on these, what has worked for others in the past, and alternative ideas.
Note that we are using Moodle.

Multiple Tries

One option would be to set up assessments so that students have multiple, or even unlimited, attempts at a question. Possible issues with this are:

  • Issue 1: If a student can't get an earlier part of a question correct, even after multiple attempts, they still won't be able to get any later parts correct that rely on it. Multiple tries reduce this risk but don't remove it.
  • Issue 2: Suppose a question has a large number of parts. If a student gets one part wrong, then each time they reattempt the question they have to redo all of it, including the parts they answered correctly. This could frustrate students. Note that this isn't specific to the lack of follow-on: any question with many parts has this issue.

Setting up questions so that later sub-questions don't use answers to earlier sub-questions

We could design the questions so that the information needed to answer each question part doesn't include the student's answer to any earlier question parts. For example, a standard confidence intervals question would go something like:

  • Calculate the upper and lower limits of the interval.
  • Interpret the interval.

Such a setup would be problematic without follow-on: if a student made a mistake in calculating the interval, their interpretation could be marked wrong purely because their interval was wrong, not because they were unable to interpret their interval correctly. However, instead of asking students to state the upper and lower limits of the interval, we could go something like:

  • State the R command or formula used to calculate the interval.
  • Assuming the upper limit (UL) and lower limit (LL) are ..., select the appropriate interpretation.

In this latter setup, since the student's answer to the calculation part isn't used in the interpretation part, failing to calculate the interval correctly wouldn't prevent them from getting the interpretation part correct.
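
To make the decoupling concrete, here is a minimal R sketch of the data-generation logic behind such a question (illustrative only, not a full R/exams exercise file; the sample, variable names, and rounding are my own choices):

  # Simulate a sample and compute its 95% confidence interval.
  set.seed(1)                               # fixed here for reproducibility
  x <- rnorm(30, mean = 50, sd = 8)         # hypothetical sample
  ci <- t.test(x, conf.level = 0.95)$conf.int
  ll <- round(ci[1], 2)                     # lower limit, shown to the student
  ul <- round(ci[2], 2)                     # upper limit, shown to the student
  # Part 1 asks only for the command, e.g. "t.test(x)$conf.int".
  # Part 2 states ll and ul explicitly ("Assuming LL = ... and UL = ..."),
  # so an error in part 1 cannot propagate into part 2.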

Showing Earlier Answers

Let's say part (b) of a question uses the answer to part (a). One idea would be to set up assessments so that students have the option to reveal the answer to part (a) if they can't work it out. In Moodle, this would require parts (a) and (b) to appear as separate questions, so that students can see the answer to part (a) without seeing the answer to part (b). Ideally, we would also need some way of ensuring that parts (a) and (b) use the same random values and dataset. Ensuring that separate questions use the same random values and dataset isn't currently possible in Moodle when questions are randomly selected into quizzes (see Achim's answer in this discussion).

But if we select fixed questions into quizzes then, following Achim's idea in this thread, we could generate a small number (say, n) of different instances of each question part, linking them in R/Exams by using a shared environment when generating each pair of part (a) and (b) instances (as discussed here), create n different quizzes where the ith quiz uses the ith instance of each question part, and manually allocate each student to one of the n quizzes. However, this feels a bit cumbersome.
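
For concreteness, a rough sketch of that generation loop is below. This is an assumption-laden illustration: the part_a.Rmd and part_b.Rmd file names are hypothetical, and it assumes an envir argument can be used to share state between the two exercise files within one call, as in the linked thread (check the current exams2moodle() interface before relying on this):

  library(exams)

  n <- 5  # number of linked quiz versions to generate
  for (i in seq_len(n)) {
    env <- new.env()     # shared environment so parts (a) and (b) see the same data
    set.seed(1000 + i)   # one seed per quiz version, for reproducibility
    exams2moodle(c("part_a.Rmd", "part_b.Rmd"),  # hypothetical exercise files
      n = 1, envir = env,                        # 'envir' as per the linked thread
      name = paste0("linked_quiz_", i))
  }

Each pass would then produce one Moodle XML file containing the ith instance of both parts, to be imported as the ith quiz.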

Extending R/Exams to allow follow-on

The Moodle "formulas" question type does allow for follow-on. So, in theory, a separate version of the exams2moodle function could output questions in the Moodle formulas question type (rather than just as cloze, numerical, etc.). I may get time to develop something like this, depending on the difficulty of the task. How much work would be required? And could it, if developed, be included in the package?
Note that the other Moodle question type supporting follow-on is STACK, so in theory we could do the same with STACK questions. But STACK isn't well suited to writing statistics questions, as it doesn't handle questions with datasets well.

Asked Jan 22 '26 by Alex Michael


1 Answer

Thanks for raising this issue. Follow-on marking is clearly something that is commonly done in classic written homework but is often not possible in e-learning settings due to technical limitations. Disclaimer: I'll provide some comments here, but I don't have a conclusive answer.

How we do it in our own courses

In summative exams we avoid such questions. In formative assessments we use high-frequency, low-stakes tests: students get a lot of tests but with ample time (several days), they are allowed to discuss with each other, and we even give them some hints ourselves. They also typically get two attempts per question. All of this encourages them to try to get it right and to give and receive support in a collaborative setting.

Pro: Students are not so stressed about the tests because they have the time and the means to compensate. It's easier for them to accept that there are no compensations within an exercise (such as follow-on). They don't feel lost (or at least "less lost").
Con: Lots of opportunities to get through without actually learning the material yourself.

Feedback to your questions

Multiple tries: I think this is very useful, but we've had poor experiences with unlimited attempts: some students keep guessing without trying to understand. A small, limited number of attempts is very nice, though, especially if it can be used to motivate students to learn more, discuss with their peers, etc.

Independent sub-questions: If you can make the questions fully independent, then I think this is very useful, especially for summative exams. In a formative setting you could have one combined question about calculating and interpreting a confidence interval (without follow-on), while in the summative setting you could have separate questions: one for computing a confidence interval and another for interpreting a given confidence interval.

Showing earlier answers: This could be a solution but I wouldn't know of a good way to do this in Moodle.

Extending R/exams with follow-on: I'm open to discussion here, but I typically try to avoid implementing solutions that are very specific to one learning management system. If we find a design that could at least potentially be ported to other systems, we can think about it. In this case, however, the specifics of how the follow-on marks are computed will probably depend very much on the learning management system.

Final remark: A short technical note, in case you are not aware of this possibility. In R/exams cloze exercises there is also the "verbatim" clozetype, which you can use to allow several answers, e.g., a fully correct and a partially correct one. For a worked example, see the confint3 exercise template.
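
For instance, a sketch of what the meta-information for such an item could look like: with the "verbatim" clozetype the exsolution string is passed through to Moodle's embedded-answer (cloze) syntax, so partial credit can be specified directly. The numbers and tolerances below are invented; see the confint3 template for a worked version:

  Meta-information
  ================
  extype: cloze
  exclozetype: verbatim
  exsolution: {1:NUMERICAL:=3.27:0.005~%50%3.32:0.005}

Here the first value would earn full credit and the second (say, a differently rounded answer) 50%.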

Answered Jan 25 '26 by Achim Zeileis


