Solutions to Engage & Involve your Audience

Tag: Assessment

  • Assessing Instant Questions lifts in-class engagement

    We know that regularly asking “instant” questions in class is a sure-fire way of lifting and sustaining engagement.  Now, using Xorro-Q, assessing instant questions has become easy – even in class.  Students’ responses to instant questions can be automatically rewarded: either by assessing them for correctness or by recognising the contributor through participation points.

    Using Xorro-Q, questions can be prepared in advance and then “asked” selectively at any time during class.  An “Instant Question” differs from these in that it provides a response form for a question which has not been prepared in advance.  For example, an educator might pause a video and “ask” the audience for an interpretation of a particular scene.  If a text response is wanted, the educator just presses a single button (on Xorro-Q’s Q-Launcher floating toolbar) and the audience’s devices are served with a text response form.  As responses arrive, the educator might preview these on another screen, or might prefer to allow them to be seen “live” as a stream, or as a word cloud.  Learn more about Instant Questions

    Instant questions now account for 23% of all in-class (real-time) questions asked in Xorro-Q, and that proportion is rising.

    Instant questions are popular with educators: they add spontaneity to the students’ classroom experience, and they require no preparation.  When using Q-Launcher, instant questions from any context (slide shows, videos etc.) really are instant, so nothing gets in the way of the flow of the session.  By asking instant questions, the Facilitator can easily gauge how effectively the message is getting across, as well as ensuring that everyone in the audience remains actively engaged.  Xorro-Q tracks participation, so a Facilitator can recognise in-class participation and award points on this basis without regard to the “correctness” or otherwise of any contributions.

    For many, the missing factor has been the lack of assessment value in instant questions.  Assessment was limited to questions prepared in advance, so the motivational value of participation was somewhat reduced for instant questions.  This has now changed:

    Xorro-Q’s Instant Questions can now be assessed just as instantly as they are launched and answered.

    In Xorro-Q, multi-choice instant questions can now be assessed when the results are in.

    Typically the Facilitator will close the question (to further submissions), then display the results to the group as a bar chart.  By right-clicking any option and setting it to “correct”, the Facilitator now automatically adds to the scores for those participants who chose that option.  It’s easy, and instant; it also encourages participation!
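    The scoring model described above can be sketched in a few lines of Python.  This is an illustration only; the function and point values are assumptions, not Xorro-Q’s actual API: every submission earns a participation point, and once an option is marked “correct”, participants who chose it gain an additional correctness score.

    ```python
    # Illustrative sketch of the scoring model described above.
    # Names and point values are assumptions, not Xorro-Q's actual API.
    PARTICIPATION_POINTS = 1
    CORRECTNESS_POINTS = 5

    def score_responses(responses, correct_option):
        """responses: dict mapping participant id -> chosen option."""
        scores = {}
        for participant, option in responses.items():
            # Every submission earns a participation point...
            scores[participant] = PARTICIPATION_POINTS
            # ...and choosing the option later marked "correct" adds more.
            if option == correct_option:
                scores[participant] += CORRECTNESS_POINTS
        return scores

    print(score_responses({"ana": "B", "ben": "C", "caz": "B"}, "B"))
    # → {'ana': 6, 'ben': 1, 'caz': 6}
    ```

    Because the correct option is supplied after the responses are collected, the same mechanism works whether the question was prepared in advance or asked on the spur of the moment.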

    The benefits of assessing instant questions include:

    • participants are motivated to remain engaged and alert to the topic;
    • questions can be tailored to make them highly relevant and specific to the topic under discussion;
    • participants commit more deeply once they have ventured an opinion (albeit silently).

    Future:

    Assessment for instant questions is presently only available for multiple choice questions.  We are working on incorporating assessment into the instant hotspot question type as well, so that on receiving a group’s results (a range of selected locations on the image), the Facilitator may circle (or “lasso”) a subset of the range and set that to “correct”.

    About instant questions:

    Xorro-Q’s instant questioning functionality is lauded by educators for its ease of use and ease of access.  Whatever the context (e.g. slide show, video, speech), a “Facilitator” can spontaneously “ask” a question without breaking away from the flow of the presentation.  The audience can just as instantly answer the question using their own devices.  A range of question types is supported, including multiple choice, numeric answer, text answer, and even hotspot clickable images.  The grouped responses can be displayed on the screen, and selected responses can be marked as “correct”.  Every response is awarded a participation point, and “correct” responses can be awarded additional scores as well.

    By being given “safe” opportunities to participate in class, audience members take a much closer interest in the topic. Being able to safely and anonymously experiment, venture their opinions or ideas, and use their existing knowledge to attempt or guess responses, participants can learn instantly from feedback as well as gain motivation from observing how their responses compared with those of peers.

    Relevant Xorro articles:  Get started with Instant Activities

  • Q’s Numeric and Text Questions become Tolerant

    One of the compromises in automatically assessed numeric (and text) response questions is that they demand exact responses from participants.  For example, in asking a group of participants “Who was the leader of Nazi Germany?”, we might demand “Adolf Hitler” as the “correct” answer, but we will in fact receive variations such as “Hitler”, “AdolfHitler”, “Adolf hitler”, “hitler”, “A.Hitler”, and so on.  Most of us would be inclined to accept several of these responses as correct, or at least partially correct.  The same often applies to the numeric answer of a multi-step calculation.  What shall we accept as the circumference of a circle with radius 3cm?  A “correct” score depends on the participant’s choice of significant figures, and of units.  In the past, the range of acceptable alternative answers made the automated assessment of these types of questions problematic.  No longer: Xorro’s latest text and numeric questions now provide for a range of possible responses, and for tolerance of variations in the submitted response “string”.

    Numeric response questions:

    When creating (or editing) a numeric response question, the author can now specify an acceptable tolerance for the response.  This can be a percentage, or it can be set as an incremental value relative to the target “answer” value.  In addition, the author can set a preference for such tolerances, which will then apply by default to all of that author’s questions (except where overridden by the author at question level).  A new feature permits the author to set a prefix and/or a suffix to the answer field on the participant screen.  A prefix might be (for example) “$” or “US$”, while a suffix might be “.00”, “N”, “kN”, or “%”.  These make clear to the respondent the intended format of the response: the units being used, and perhaps the number of significant figures to be applied.

    EXAMPLE: Consider the question: “Using a discount rate of 12% per annum, calculate the Net Present Value of a payment of £35,000 at the end of 36 months from today.”

    In setting the correct answer (£24,707) the author might decide to permit a variation of +/-0.1%, which will recognise those respondents who enter £24,700.  By setting a prefix “£” to display in front of the answer field, the respondent is encouraged not to enter the “£” character in the answer.
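    A tolerance check of this kind is simple to express.  The sketch below is an assumption about how such a check might work (parameter names are illustrative, not Xorro-Q’s actual settings): a response is accepted if it falls within either a percentage band or a fixed increment around the target answer.

    ```python
    # Illustrative sketch of a percentage / incremental tolerance check.
    # Parameter names are assumptions, not Xorro-Q's actual settings.
    def within_tolerance(answer, target, pct=None, increment=None):
        """Accept `answer` if it falls within the configured tolerance of `target`."""
        if pct is not None and abs(answer - target) <= abs(target) * pct / 100.0:
            return True
        if increment is not None and abs(answer - target) <= increment:
            return True
        # With no tolerance configured, fall back to an exact match.
        return pct is None and increment is None and answer == target

    # A +/-0.1% band around 24707 accepts 24700 but rejects 24000.
    print(within_tolerance(24700, 24707, pct=0.1))   # True
    print(within_tolerance(24000, 24707, pct=0.1))   # False
    ```

    Setting the tolerance at question level, with an author-wide default, then reduces to choosing which `pct` or `increment` value to pass in.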

    Text response questions:

    When setting the model (“correct”) answer for a text response question, authors are no longer restricted to a single “string” of text.  Now, multiple alternative “answer terms” can be listed, each of which may or may not be identified as “correct”.  Each specified term can have feedback associated with it, and each may attract a score if used by the participant.  In addition, the author can choose the rigour to be applied to grammar: for example, whether to enforce capitalisation, punctuation and use of spaces.  Lastly, the mere inclusion of a target string in a participant’s response can be deemed sufficient: for example (replying to the question in the first paragraph), a participant who answers “probably Hitler” would still get the positive score associated with the term “Hitler”.

    EXAMPLE:

    Consider the question “What is the term used to describe a graph of the form y = ax² + bx + c ?”

    Specified terms might be: “parabola” (correct, 5 points); “parabolic” (correct, 5 points); “quadratic” (partially correct, 3 points); “hyperbola” (incorrect, 1 point); “line” (incorrect, 1 point).  Each of these might have specific feedback for the respondent.  There might also be feedback and a score offered for the case where none of these terms features in an answer.  By default, a participant who enters “linear parabola cubic hyperbola” would not get any points; otherwise the author might choose to allow mere inclusion of the word “parabola” in the submission to attract its points.
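    The matching behaviour described above can be sketched as follows.  This is a hypothetical illustration (the option names are assumptions, and Xorro-Q’s real matching rules may differ): each answer term carries its own score, capitalisation and punctuation can optionally be relaxed, and “mere inclusion” of a term can optionally be accepted.

    ```python
    import re
    import string

    # Hypothetical sketch of answer-term matching with adjustable rigour.
    # Option names are assumptions, not Xorro-Q's actual settings.
    def score_text_answer(response, terms, enforce_case=False,
                          enforce_punctuation=False, allow_inclusion=False):
        """terms: dict mapping answer term -> points awarded."""
        def normalise(s):
            if not enforce_case:
                s = s.lower()
            if not enforce_punctuation:
                s = s.translate(str.maketrans("", "", string.punctuation))
            return s.strip()

        response_n = normalise(response)
        for term, points in terms.items():
            term_n = normalise(term)
            if response_n == term_n:
                return points
            # Optionally accept mere inclusion of the term in the response.
            if allow_inclusion and re.search(r"\b" + re.escape(term_n) + r"\b",
                                            response_n):
                return points
        return 0

    terms = {"parabola": 5, "parabolic": 5, "quadratic": 3,
             "hyperbola": 1, "line": 1}
    print(score_text_answer("Parabola.", terms))                                  # 5
    print(score_text_answer("probably a parabola", terms, allow_inclusion=True))  # 5
    ```

    With the default (exact-match) behaviour, a submission containing several of the terms scores nothing, as in the example above; enabling inclusion lets the first listed term found in the response attract its points.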