From version < 13.1 >
edited by Randell Greenlee
on 2021/11/09 17:06
To version < 14.1 >
edited by Randell Greenlee
on 2021/11/09 17:11
< >
Change comment: There is no comment for this version

Summary

Details

Page properties
Title
... ... @@ -1,1 +1,1 @@
1 -Written Test - Multiple Choice
1 +Written Test - Open Answers
Content
... ... @@ -4,13 +4,10 @@
4 4  
5 5  = Description =
6 6  
7 -The multiple-choice test consists of questions where only one answer can be correct. In contrast, the multiple-response test consists of questions where several answers can be correct. Neither craft skills nor creativity can be measured with either type of test. Nevertheless, these tests are not limited to querying factual knowledge. Unless otherwise stated, both test types are summarized below under the term "multiple-choice test". The following cognitive performance levels can be tested via multiple-choice tests:
7 +Open questions are all questions for which there are no predefined answer options. This means that candidates are not influenced by given answer options. In the written test, the questions are answered in writing. Open questions can be used to query information in text form or numerical information (e.g. a number of minutes). Open questions are suitable, for example, if the understanding of a situation is to be assessed or if the number of answer options would be too large.
8 8  
9 -* Reproduction of stored knowledge
10 -* Reorganization: Learned knowledge is processed and arranged independently.
11 -* Transfer: Basic principles are transferred to new, similar tasks.
12 -* Problem-solving thinking: Tasks with new questions and aspects are solved.
13 -
9 +Case studies can be regarded as a sub-form of the written test (open answers). They can be used to simulate everyday working life and the tasks associated with it. The tasks to be solved address a problem common to the industry. Here, analytical and organizational competencies, such as the approach to a difficult problem, are tested.
10 +Case studies measure action orientation, entrepreneurial thinking and understanding of complexity.
14 14  
15 15  ----
16 16  
... ... @@ -18,15 +18,19 @@
18 18  
19 19  === Validity ===
20 20  
21 -Validity is not necessarily given. The main problem is the discrepancy between complex cognitive processes and simply ticking an answer. Multiple-choice tests can evaluate a competence to act if they require the quick assessment of different facets of a complex subject.
18 +Open questions are used to check knowledge or situational interpretation. The disadvantage is that such a test measures the ability to express oneself on paper more than the actual ability to perform in real life. It shows that candidates know how to act, but not that they are able to act. Answers are checked against a checklist but require interpretation by skilled assessors.
19 +Open answers are most suitable in situations where new information is to be gained, where respondents should not be primed by given response options, or where holistic feedback is requested and predefined answers would significantly limit the informative value of the response.
22 22  
23 23  === Reliability ===
24 24  
25 -The objectivity of Multiple-Choice Tests is significantly higher than for classic written exams. The reason for this is that the correction is independent of the correcting person and is often automated. In addition, well-formulated questions go hand in hand with high reliability. If the tests are carried out regularly and with a large number of test persons, they are particularly economical.
23 +Tests can be intimidating for people who have had bad experiences with these types of tests in previous learning contexts.
24 +Research has shown that dissatisfied people give longer answers to express their dissatisfaction. Thus, the respondent's mood influences the length of the response, which limits reliability.
25 +Different sizes of the answer field for the same question also affect reliability.
26 26  
27 27  == Limitations ==
28 28  
29 -Creativity or craft skills cannot be tested with multiple choice tests.
29 +Since a high degree of formulation competence is required to answer open questions, a lack of professional competence cannot necessarily be inferred from an inadequate answer.
30 +Open questions are not suitable for measuring practical skills. They are only of limited help when assessing social skills.
30 30  
31 31  ----
32 32  
... ... @@ -34,15 +34,18 @@
34 34  
35 35  == Tips ==
36 36  
37 -To reduce the possibility of scoring by guessing, the alternative answers should be of high quality. According to studies, three answer options are sufficient to make successful guessing unlikely.
38 +The questions should be clearly formulated. It must be clear to the candidate what form of answer is expected (e.g. bullet points, a short essay, several details).
39 +With extensive case studies, more time is needed to analyze the text and answer the questions.
40 +In order to reduce the chance of accidental hits and thus increase reliability, there should be several independent observation options for each requirement dimension.
38 38  
39 39  == Traps ==
40 40  
41 -Disadvantages of too many alternative answers are greater design effort, the risk of implausible distractors (distractor = false answer option), and the revealing of context hints for other tasks. Depending on the task, five or more answer options can also be useful.
44 +The sizes of the answer fields should be adjusted according to the expected scope of the response. A reasonable amount of time should be allotted for answering each question.
42 42  
43 43  == Scoring Tools ==
44 44  
45 -A major advantage of multiple-choice / multiple-response tests (short: MC tests) is that they can be evaluated easily. In addition, the quality of particular questions can be determined statistically, so badly formulated questions can be recognized as such. As already mentioned, only one answer can be correct for multiple-choice questions. If the right answer is marked, one point is awarded; if a wrong answer or no answer is marked, no point is awarded. The awarding of points can differ for multiple-response tests. Every correctly marked answer scores one point (right answers marked, or false answers not marked). Every incorrectly marked answer brings one minus point (right answers not marked, or false answers marked). The difference determines the total score of the question; if the difference is negative, the score is 0 points.
48 +A correction key (specifying what the assessors expect to see) should be in place to enable the assessment.
49 +To assess the case study, the assessor uses a model solution (chronologically based on the items in the case study) and an observation sheet (sorted by competencies). The answers given are compared with the model solution. In order to ensure evaluation objectivity, particularly creative answers do not receive additional points. If the required answer has not been given, the candidate can be asked follow-up questions per item. The ticks are then added up and entered on an appropriate scale. Finally, the assessors compare the results of their observations with each other in order to record an overall result.
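The tick-counting and scaling step described above can be sketched in code. This is a minimal illustration, assuming a linear mapping of tick counts onto a 1 to 5 rating scale and a simple average across assessors; the function names, the scale range, and the mapping rule are assumptions for illustration, not part of the source.

```python
def scale_score(ticks: int, max_ticks: int, scale_max: int = 5) -> int:
    """Map a tick count from an observation sheet onto a 1..scale_max rating.

    Assumes a linear mapping: 0 ticks -> 1, all possible ticks -> scale_max.
    """
    if max_ticks <= 0:
        raise ValueError("max_ticks must be positive")
    ticks = max(0, min(ticks, max_ticks))  # clamp to the valid range
    return 1 + round((scale_max - 1) * ticks / max_ticks)


def overall_result(sheets: list[dict[str, int]],
                   max_ticks: dict[str, int]) -> dict[str, float]:
    """Average the scaled scores of several assessors per competency.

    `sheets` holds one tick count per competency for each assessor;
    `max_ticks` gives the maximum possible ticks per competency.
    """
    return {
        competency: sum(
            scale_score(sheet.get(competency, 0), limit) for sheet in sheets
        ) / len(sheets)
        for competency, limit in max_ticks.items()
    }
```

For example, two assessors who ticked 2 and 4 of 4 possible items for an "analysis" competency would produce scaled scores of 3 and 5, and an averaged overall result of 4.0 for that competency.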
46 46  
47 47  ----
48 48  
... ... @@ -50,37 +50,34 @@
50 50  
51 51  == Information for Standard ==
52 52  
53 -For a high degree of standardization, the test sheets should be formulated in a clear and understandable way and used without modification. Language barriers should be taken into account.
57 +The written test with open answers is difficult to standardize. An evaluation of the answers by several assessors can increase the validity of the results. A marking guide should allow a wide range of possible answers without compromising the professionalism of the test.
54 54  
55 55  == Development ==
56 56  
57 -When preparing the test, high quality distractors should be chosen that are plausible without fooling the candidate. The test should be focused on the competencies to be measured and not just on smartness. A solution sheet gives the necessary information about the correct answers.
61 +The questions should be designed in such a way that the expected scope of the answers is transparent. The size of the predefined text field can give the candidate an indication of the expected scope of the answer. The questions should be derived from the UNITs of the competences to be measured. In the assessment, care should be taken not to assess linguistic expression.
58 58  
59 59  == Needs/Set-Up ==
60 60  
61 -The test can be performed either with a pen and paper or electronically on a computer.
65 +Besides pen and paper, a computer can also be used to answer the questions.
62 62  
63 63  == Requirements for Assessors ==
64 64  
65 -The design of the test requires a high level of technical expertise in order to develop useful distractors. For the evaluation of the test no special skills are required.
69 +Assessors need comprehensive skills to evaluate complex texts without bias. They must be able to identify content and professional skills even when a candidate's articulation is weak. The assessment of answers requires in-depth professional expertise.
66 66  
67 67  == Examples ==
68 68  
69 -For example, a multiple choice test can be used to test knowledge about the use of special effects.
73 +A case study is possible in which the candidate describes how they would react in the event of an accident at work.
70 70  
71 71  == In Combination with ==
72 72  
73 -Written Test - Open Answers
77 +Since the test does not measure practical skills, it should be combined with Role Play or Observation, for example. It can also be combined with a multiple-choice test to cover additional factual knowledge.
74 74  
75 75  = References/Notes =
76 76  
77 -* Baghaei, P. / Amrahi, N. (2011): The effects of the number of options on the psychometric characteristics of Multiple-Choice items. In: Psychological Test and Assessment Modeling. 53 (2), p. 192-211.
78 -* CTL Center for Teaching and Learning / Universität Wien (2015): Erstellen von schriftlichen, mündlichen und Multiple-Choice-Prüfungen. Online: [[https:~~/~~/ctl.univie.ac.at/fileadmin/user_upload/z_ctl/Qualitaet_von_Studien/Qualitaet_von_Pruefungen/170314_Erstellen_von_schriftlichen_muendlichen_MC_Pruefungen.pdf >>https://ctl.univie.ac.at/fileadmin/user_upload/z_ctl/Qualitaet_von_Studien/Qualitaet_von_Pruefungen/170314_Erstellen_von_schriftlichen_muendlichen_MC_Pruefungen.pdf]](last 25.05.2020).
79 -* Elsa / Leibniz Universität Hannover (n.d.): Erstellen und Bewerten von Multiple-Choice-Aufgaben. Online: [[https:~~/~~/www.zqs.uni-hannover.de/fileadmin/zqs/PDF/E-Learning/elsa_handreichung_zum_erstellen_und_bewerten_von_mc-fragen_2013.pdf >>https://www.zqs.uni-hannover.de/fileadmin/zqs/PDF/E-Learning/elsa_handreichung_zum_erstellen_und_bewerten_von_mc-fragen_2013.pdf]](last: 25.05.2020).
80 -* Kubinger, K. (2014): Gutachten zur Erstellung “gerichtsfester” Multiple-Choice-Prüfungsaufgaben. In: Psychologische Rundschau. 65 (3), p. 169-178.
81 -* Rodriguez, M. (2005): Three Options Are Optimal for Multiple-Choice Items: A Meta-Analysis of 80 Years of Research. In: Educational Measurement. Issues and Practice. 24 (2), p. 3-13.
82 -* Universität Kassel (n.d.): Handreichung für Klausuren mit Aufgaben nach dem Antwort-Wahl-Verfahren (Single-Choice/Multiple-Choice). Online: [[http:~~/~~/www.uni-kassel.de/einrichtungen/fileadmin/datas/einrichtungen/scl/E-Klausuren/Handreichung_Antwort_Wahl_Aufgaben_final.pdf>>http://www.uni-kassel.de/einrichtungen/fileadmin/datas/einrichtungen/scl/E-Klausuren/Handreichung_Antwort_Wahl_Aufgaben_final.pdf]] (last: 25.05.2020).
83 -* Universität Zürich (n.d.): Hochschuldidaktik A – Z. Multiple-Choice-Prüfungen. Online: [[http:~~/~~/www.hochschuldidaktik.uzh.ch/dam/jcr:ffffffff-9a08-8cca-0000-00002cfe461f/A_Z_Multiple-Choice.pdf >>http://www.hochschuldidaktik.uzh.ch/dam/jcr:ffffffff-9a08-8cca-0000-00002cfe461f/A_Z_Multiple-Choice.pdf]](last: 02.06.2020).
81 +* CEDEFOP (2016): Europäische Leitlinien für die Validierung nicht formalen und informellen Lernens. Luxemburg: Amt für Veröffentlichungen der Europäischen Union.
82 +* Eck, C. et al. (2016): Assessment-Center. Entwicklung und Anwendung – mit 57 AC-Aufgaben und Checklisten zum Downloaden und Bearbeiten im Internet. 3. Aufl. Berlin / Heidelberg: Springer.
83 +* Obermann, C. (2018): Assessment Center. Entwicklung, Durchführung, Trends. Mit neuen originalen AC-Übungen. 6., vollständig überarb. u. erw. Aufl. Wiesbaden: Springer Fachmedien.
84 +* Züll, G. (2015): Offene Fragen. Hannover: Leibniz-Institut für Sozialwissenschaften. Online: [[https:~~/~~/www.gesis.org/fileadmin/upload/SDMwiki/Archiv/Offene_Fragen_Zuell_012015_1.0.pdf>>https://www.gesis.org/fileadmin/upload/SDMwiki/Archiv/Offene_Fragen_Zuell_012015_1.0.pdf]].
84 84  
85 85  == AT ==
86 86