From version < 11.2 >
edited by Randell Greenlee
on 2021/11/09 16:51
To version < 16.1 >
edited by Ivan Kharitonov
on 2022/01/30 17:25
Change comment: There is no comment for this version

Summary

Details

Page properties
Title
... ... @@ -1,1 +1,1 @@
1 -Oral Examination
1 +Written Test - Open Answers
Author
... ... @@ -1,1 +1,1 @@
1 -XWiki.RandellGreenlee
1 +XWiki.IvanKharitonov
Content
... ... @@ -1,12 +5,9 @@
1 -{{box cssClass="floatinginfobox" title="**Contents**"}}
2 -{{toc/}}
3 -{{/box}}
4 -
5 5  = Description =
6 6  
7 -The simulated environment reflects a real live situation, but is standardised. This makes it possible to build in incentives for behaviour or choices. The situation can be a "copy" of a real live situation, but also a roleplay (for more behaviour skills). The candidate is observed in this simulated situation.
3 +Open questions are questions for which no answer options are given, so the candidate is not influenced by predefined alternatives. In the written test, the questions are answered in writing. Open questions can be used to elicit information in text form or as numerical information (e.g. a number of minutes). They are suitable, for example, when the understanding of a situation is to be assessed or when the number of possible answer options would be too large.
8 8  
9 -This method is used for skills that can be shown in the workspace. The assessment method allows to test very specific competences, as the environment can be controlled. Mainly for practical, observable skills.
5 +Case studies can be regarded as a sub-form of the written test (open answers). They can be used to simulate everyday working life and the tasks associated with it. The tasks to be solved address a problem common to the industry. Here, analytical and organizational competencies, such as the approach to a difficult problem, are tested.
6 +Case studies measure action orientation, entrepreneurial thinking and an understanding of complexity.
10 10  
11 11  ----
12 12  
... ... @@ -14,18 +14,19 @@
14 14  
15 15  === Validity ===
16 16  
17 -Since all factors are under control, the internal validity of this method is high. The method excludes unpredictability of situation and environment. So, it is easier to ensure safety. Very specific competencies can be tested. Since the behavior of people can change as a result of the observation situation (Hawthorne effect), internal validity is also threatened. This effect can be partially reduced if the work situation is only filmed (indirect observation). Since it is less a real-life situation, the external validity (transferability) of the observed behavior is lower. A good test will reflect real life situations in a controlled environment as much as possible.
14 +Open questions are used to check knowledge or the interpretation of a situation. The disadvantage is that they test the skill of expressing oneself on paper more than the real ability to perform in real life: they prove that the candidate knows how to act, but not that he or she is able to act. Answers are checked against a checklist but require interpretation by skilled assessors.
15 +Open answers are most suitable in situations where new information should be gained, where respondents should not be primed by given response options, or where holistic feedback is sought and predefined answers would significantly limit the informative value of the response.
18 18  
19 19  === Reliability ===
20 20  
21 -The quality of simulated environment observation depends on the accuracy and repeatability of the test setup.
22 -Simulated environments guarantee equal treatment of candidates, the result should be identical, wherever and by whatever assessors they are conducted. Therefore every candidate is assessed in an identical situation.
23 -One of the elements in this are well trained assessors and a levelling system, this avoids that assessment would be biased by assessors influenced by previous tests or looking outside the competences to for example not occupation related behaviour.
24 -The reliability is increased by the possibility to easily develop exact observable criteria.
19 +Tests can be intimidating for people who have had bad experiences with these types of tests in previous learning contexts.
20 +Research has shown that dissatisfied people give longer answers to express their dissatisfaction. Thus, the respondent's mood influences the length of the response, which limits reliability.
21 +Different sizes of the answer field for the same question also affect reliability.
25 25  
26 26  == Limitations ==
27 27  
28 -Development of a assessment set-up is time consuming.
25 +Since a high degree of formulation competence is required to answer open questions, a lack of competence cannot necessarily be inferred from an inadequate answer.
26 +Open questions are not suitable for measuring practical skills. They are only of limited help when assessing social skills.
29 29  
30 30  ----
31 31  
... ... @@ -33,25 +33,18 @@
33 33  
34 34  == Tips ==
35 35  
36 -Organise the test in a way that the candidate feels at ease. If it is a tradition to have a cup of coffee at the start of a working day, include this in the startup of the test.
37 -Give the candidate time to discover the situation.
38 -Do not built in traps or tricky situations that hardly ever occur in real life.
39 -Be clear and open about the role and activities of the observers. Attention points can be:
34 +The questions should be clearly formulated. It must be clear to the candidate what form of answer is expected (e.g. bullet points, a short essay, several details).
35 +With extensive case studies, more time is needed to analyze the text and answer the questions.
36 +To reduce the chance of chance hits and thus increase reliability, there should be several independent observation opportunities for each requirement dimension.
40 40  
41 -* Observers write also about positive points.
42 -* Observers are silent, because they keep a distance.
43 -* Observers will only stop the test in case of danger or overtime.
44 -
45 45  == Traps ==
46 46  
47 -If the candidate needs support, the assistant must be trained to limit the intervention to what the candidate requires and not (as we would do in reality) to take over the decision-making process or be proactive.
48 -There is a risk that the assessor is biased. That is why assessors should be professionals from the field of competences being assessed. Assessors have to be aware that there are different methods to perform a specific task and should take distance from one prefered method, for as far as the goal is reached.
40 +The size of the answer field should be adjusted to the expected scope of the response. A reasonable amount of time should be allowed for answering each question.
49 49  
50 50  == Scoring Tools ==
51 51  
52 -Observing can be done through a list of observable criteria. The criteria should be derived from the sectoral layer skills, in other words, they are a concretisation of the visible, observable result of the skill in a specific situation.
53 -As the situation is always identical, the scoring tool can be very specific and leave little room for interpretation.
54 -The final decision is made based on the link of the criteria with the competence and by comparing the observations of the different assessors.
44 +A correction key (describing what the assessors expect to see) can be put in place to enable the assessment.
45 +To assess a case study, the assessor uses a model solution (ordered chronologically by the items in the case study) and an observation sheet (sorted by competencies). The answers given are compared with the model solution. To ensure the objectivity of the evaluation, particularly creative answers do not receive additional points. If a required answer has not been given, the candidate can be asked follow-up questions per item. The ticks are then added up and entered on an appropriate scale. Finally, the assessors compare their observations with each other in order to record an overall result.
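The tick counting and scale mapping described above can be illustrated with a small sketch. This is only an illustrative example, assuming hypothetical competency names, item counts and a four-point scale; it is not part of the standard described on this page.

{{code language="python"}}
# Illustrative sketch: aggregating assessor ticks against a model solution.
# Competency names, item counts and the 1-4 scale are assumptions for illustration.
from statistics import mean

# Each assessor ticks, per competency, how many model-solution items were met.
assessor_ticks = {
    "Assessor A": {"analysis": 4, "organisation": 3},
    "Assessor B": {"analysis": 3, "organisation": 3},
}
items_per_competency = {"analysis": 5, "organisation": 4}

def to_scale(share):
    """Map the share of met items onto an assumed 1-4 scale."""
    if share >= 0.85:
        return 4
    if share >= 0.65:
        return 3
    if share >= 0.45:
        return 2
    return 1

# Average the assessors' ticks per competency, then map onto the scale.
overall_result = {}
for competency, total_items in items_per_competency.items():
    average_ticks = mean(ticks[competency] for ticks in assessor_ticks.values())
    overall_result[competency] = to_scale(average_ticks / total_items)

print(overall_result)  # e.g. {'analysis': 3, 'organisation': 3}
{{/code}}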
55 55  
56 56  ----
57 57  
... ... @@ -59,47 +59,31 @@
59 59  
60 60  == Information for Standard ==
61 61  
62 -The standard must describe the specific situations, incentives and expected complexity of the skills to be assessed.
53 +The written test with open answers is difficult to standardize. An evaluation of the answers by several assessors can increase the validity of the results. A marking guide should allow a wide range of possible answers without compromising the professional rigour of the test.
63 63  
64 64  == Development ==
65 65  
66 -The development of an observation in a simulated environment starts with the analysis of the skills that need to be evaluated. Since not every skill can be tested in all variations, representative situations are chosen to reflect the mastery of the general skill. The skills are built into a well-chosen scenario that reflects a real-life experience, but also integrates behavioural incentives and choices. The candidate is asked to perform a task, but the environment limits or alters the way the task is performed. In this way, the candidate must make his/her own decisions.
67 -The activities should reflect different contexts. Often a skill or behavior is built in twice to improve reliability and avoid "false positives".
68 -Assessment facilities must be tested and updated before they are used with "real" candidates.
57 +The questions should be designed so that the expected scope of the answers is transparent. The size of the predefined text field can give the candidate an indication of the expected scope of the answer. The questions should be derived from the UNITs of the competences to be measured. In the assessment, care should be taken not to assess linguistic expression itself.
69 69  
70 70  == Needs/Set-Up ==
71 71  
72 -This is an observation in a “real life” professional setting. It must be organized as a normal day in the life of the candidate (= working day). One assessor could be acting as a “colleague” the other would assess from a distance. There could also be trained “colleagues” (must not have an assessor qualification), who “work with” the candidate in the observation environment. This is only necessary when a colleague is “physically” necessary to assess the competence at hand. One assessor can't oversee all activities, idealy there are at least two assessors, one who is observing from a distance and a second one observing close.
73 -Technical competence is relatively easy to assess. Knowledge behind the action can be assessed in most cases, if the test is prepared in the proper way. Competences are tested in the “group” working environment, as it is in reality. Several competences can almost always be assessed at one time. The proper atmosphere is very important.
74 -The assessments could be done at educational institutions with the necessary equipment.
61 +Besides pen and paper, a computer can also be used to answer the questions.
75 75  
76 76  == Requirements for Assessors ==
77 77  
78 -Assessors need competences for valid observations, such as those that can be acquired in observer training courses. They should have a basic knowledge of diagnostics, be able to deal with perceptual effects (e.g. errors of observation and assessment) and be able to recognize their own subjectivity. A professional competence is essential for the evaluation of the candidate's performance against the background of the assessment standard. It is also needed to construct a work situation appropriate to the competences to be assessed.
65 +Assessors need comprehensive skills to evaluate complex texts without bias. They must be able to identify content-related and professional competence even where a candidate's written articulation is weak. The assessment of answers requires in-depth professional expertise.
79 79  
80 80  == Examples ==
81 81  
82 -For the skill "Working on heights" a candidate should perform several activities on ladders, scaffolding, Based on a checklist, his/her behaviour is observed.
69 +A possible case study asks the candidate to describe how he or she would react in the event of an accident at work.
83 83  
84 84  == In Combination with ==
85 85  
86 -This Method can be combined with a criterion focused interviews to fill the gaps or skills that have not been observed (not negative or positive). It can be combined with a multiple choice or open answer test for knowledge that is not made visible in practice.
73 +Since this method does not test practical skills, it should be combined with Role Play or Observation, for example. It can also be combined with a multiple choice test to cover factual knowledge.
87 87  
88 88  = References/Notes =
89 89  
90 -* Catalogus Assessmentmethodes voor EVC, Agentschap Hoger Onderwijs, volwassenenonderwijs, Kwalificaties en Studietoelagen, Ministery of education and training of the Flemish community (2015). Online: [[http:~~/~~/www.erkennenvancompetenties.be/evc-professionals/evc-toolbox/bestanden/catalogus-assessmentmethodes-evc-2015.pdf>>http://www.erkennenvancompetenties.be/evc-professionals/evc-toolbox/bestanden/catalogus-assessmentmethodes-evc-2015.pdf]]  (last 17.08.2020)
91 -* Jhpiego (2011): Simulation Training for Educators of Health Care Workers. Online: [[http:~~/~~/reprolineplus.org/system/files/resources/simulation_facilitatorsguide.pdf>>http://reprolineplus.org/system/files/resources/simulation_facilitatorsguide.pdf]]  (last 05.08.2020)
92 -* Multiprofessional Faculty Development (2012): Teaching and Learning in Simulated Environments. Online: [[https:~~/~~/faculty.londondeanery.ac.uk/e-learning/teaching-clinical-skills/teaching-and-learning-in-simulated-environments>>https://faculty.londondeanery.ac.uk/e-learning/teaching-clinical-skills/teaching-and-learning-in-simulated-environments]]  (last 05.08.2020)
93 -* Scottish Qualifications Authority (2019): Guide to Assessment. Online: [[https:~~/~~/www.sqa.org.uk/files_ccc/Guide_To_Assessment.pdf>>https://www.sqa.org.uk/files_ccc/Guide_To_Assessment.pdf]]  (05.08.2020)
94 -* Vincent-Lambert, C. / Bogossian, F. (2006): A guide for the assessment of
95 -* clinical competence using simulation. Online: [[https:~~/~~/pdfs.semanticscholar.org/bda7/dae4871a49e19fd2cc186823379518e39192.pdf>>https://pdfs.semanticscholar.org/bda7/dae4871a49e19fd2cc186823379518e39192.pdf]]  (last 05.08.2020)
96 -
97 -== AT ==
98 -
99 -== BE ==
100 -
101 -== DE ==
102 -
103 -== IT ==
104 -
105 -== NL ==
77 +* CEDEFOP (2016): Europäische Leitlinien für die Validierung nicht formalen und informellen Lernens. Luxemburg: Amt für Veröffentlichungen der Europäischen Union.
78 +* Eck, C. et al. (2016): Assessment-Center. Entwicklung und Anwendung – mit 57 AC-Aufgaben und Checklisten zum Downloaden und Bearbeiten im Internet. 3. Aufl. Berlin / Heidelberg: Springer.
79 +* Obermann, C. (2018): Assessment Center. Entwicklung, Durchführung, Trends. Mit neuen originalen AC-Übungen. 6., vollständig überarb. u. erw. Aufl. Wiesbaden: Springer Fachmedien.
80 +* Züll, G. (2015): Offene Fragen. Hannover: Leibniz-Institut für Sozialwissenschaften. Online: [[https:~~/~~/www.gesis.org/fileadmin/upload/SDMwiki/Archiv/Offene_Fragen_Zuell_012015_1.0.pdf>>https://www.gesis.org/fileadmin/upload/SDMwiki/Archiv/Offene_Fragen_Zuell_012015_1.0.pdf]].