From version < 10.1 >, edited by Randell Greenlee on 2021/11/09 09:46
To version < 2.1 >, edited by Chris Van Goethem on 2020/12/25 13:35
Change comment: There is no comment for this version

Summary

Details

Page properties

Parent: changed from "01 Sectoral Layer.Sectoral Layer Glossary.WebHome" to "Main.Sectoral Layer.Sectoral Layer Glossary.WebHome"
Author: changed from XWiki.RandellGreenlee to XWiki.ChrisVanGoethem

Content
{{box cssClass="floatinginfobox" title="**Contents**"}}
{{toc/}}
{{/box}}

= Description =

The simulated environment reflects a real-life situation but is standardised, which makes it possible to build in incentives for particular behaviour or choices. The situation can be a "copy" of a real-life situation, but also a role play (better suited to behavioural skills). The candidate is observed in this simulated situation.

This method is used for skills that can be demonstrated in the workplace. Because the environment can be controlled, it allows very specific competences to be tested. It is mainly suited to practical, observable skills.

----

= Quality Concepts =

=== Validity ===

Since all factors are under control, the internal validity of this method is high. The method excludes the unpredictability of situation and environment, so it is easier to ensure safety, and very specific competences can be tested. However, since people's behaviour can change as a result of being observed (the Hawthorne effect), internal validity is also threatened. This effect can be partially reduced if the work situation is only filmed (indirect observation). Since it is less of a real-life situation, the external validity (transferability) of the observed behaviour is lower. A good test reflects real-life situations in a controlled environment as closely as possible.

=== Reliability ===

The quality of observation in a simulated environment depends on the accuracy and repeatability of the test set-up.
Simulated environments guarantee equal treatment of candidates: the result should be identical wherever, and by whichever assessors, the test is conducted, because every candidate is assessed in an identical situation.
Well-trained assessors and a levelling system are part of this; they prevent assessments from being biased by assessors who are influenced by previous tests or who look beyond the competences at, for example, behaviour unrelated to the occupation.
Reliability is further increased by the possibility of easily developing exact, observable criteria.

== Limitations ==

Development of an assessment set-up is time-consuming.

----

= Considerations =

== Tips ==

Organise the test in a way that puts the candidate at ease. If it is a tradition to have a cup of coffee at the start of a working day, include this in the start of the test.
Give the candidate time to discover the situation.
Do not build in traps or tricky situations that hardly ever occur in real life.
Be clear and open about the role and activities of the observers. Points of attention can be:

* Observers also note positive points.
* Observers remain silent because they keep their distance.
* Observers will only stop the test in case of danger or overtime.

== Traps ==

If the candidate needs support, the assistant must be trained to limit the intervention to what the candidate requires and not (as we would do in reality) to take over the decision-making process or be proactive.
There is a risk that the assessor is biased. That is why assessors should be professionals from the field of the competences being assessed. Assessors have to be aware that there are different methods of performing a specific task and should distance themselves from any one preferred method, as long as the goal is reached.

== Scoring Tools ==

Observation can be done using a list of observable criteria. The criteria should be derived from the sectoral layer skills; in other words, they are a concretisation of the visible, observable result of a skill in a specific situation.
As the situation is always identical, the scoring tool can be very specific and leave little room for interpretation.
The final decision is made based on the link between the criteria and the competence, and by comparing the observations of the different assessors.
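The decision rule described here can be sketched in code. This is a minimal illustration only: the criteria names, the per-assessor pass/fail recording and the "all assessors must agree" rule are assumptions for the example, not part of any assessment standard.

```python
# Hypothetical sketch of a criterion-based scoring tool.
# Each assessor records True/False per observable criterion; the final
# verdict compares the observations of the different assessors.

def final_decision(criteria, observations):
    """A criterion counts as met only if every assessor observed it
    (an assumed combination rule, chosen for illustration)."""
    verdicts = {}
    for criterion in criteria:
        votes = [obs.get(criterion, False) for obs in observations]
        verdicts[criterion] = all(votes)
    return verdicts

# Example: two assessors observing a candidate against three
# illustrative "working at heights" criteria.
criteria = ["secures ladder", "wears harness", "checks scaffold tags"]
assessor_a = {"secures ladder": True, "wears harness": True,
              "checks scaffold tags": False}
assessor_b = {"secures ladder": True, "wears harness": True,
              "checks scaffold tags": True}

result = final_decision(criteria, [assessor_a, assessor_b])
print(result)  # "checks scaffold tags" fails: the assessors disagree
```

Because the simulated situation is identical for every candidate, such a checklist can stay this concrete, which is what leaves little room for interpretation.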

----

= Implementation =

== Information for Standard ==

The standard must describe the specific situations, incentives and expected complexity of the skills to be assessed.

== Development ==

The development of an observation in a simulated environment starts with an analysis of the skills that need to be evaluated. Since not every skill can be tested in all its variations, representative situations are chosen to reflect mastery of the general skill. The skills are built into a well-chosen scenario that reflects a real-life experience but also integrates behavioural incentives and choices. The candidate is asked to perform a task, but the environment limits or alters the way the task can be performed; in this way, the candidate must make his/her own decisions.
The activities should reflect different contexts. Often a skill or behaviour is built in twice to improve reliability and avoid "false positives".
Assessment facilities must be tested and updated before they are used with "real" candidates.
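A back-of-the-envelope illustration of why building a skill in twice helps (the 10% figure is an assumed example, not from the source): if a candidate who has not mastered a skill could pass a single observed situation by chance with probability p, then with two independent situations both lucky passes must occur, so the false-positive probability drops to p squared.

```python
# Illustrative arithmetic only: assumed chance that a candidate
# without the skill "passes" one observed situation by luck.
p_single = 0.10

# With two independent situations, both lucky passes must occur,
# so the probabilities multiply.
p_double = p_single * p_single

print(round(p_double, 4))  # an assumed 10% lucky-pass chance drops to 1%
```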

== Needs/Set-Up ==

This is an observation in a “real-life” professional setting. It must be organised as a normal working day in the life of the candidate. One assessor could act as a “colleague” while the other assesses from a distance. There could also be trained “colleagues” (they do not need an assessor qualification) who “work with” the candidate in the observation environment; this is only necessary when a colleague is physically required to assess the competence at hand. One assessor cannot oversee all activities, so ideally there are at least two assessors: one observing from a distance and a second observing close up.
Technical competence is relatively easy to assess. The knowledge behind an action can be assessed in most cases if the test is prepared in the proper way. Competences are tested in the “group” working environment, as they are in reality. Several competences can almost always be assessed at one time. The proper atmosphere is very important.
The assessments could be done at educational institutions with the necessary equipment.

== Requirements for Assessors ==

Assessors need competences for valid observations, such as those that can be acquired in observer training courses. They should have a basic knowledge of diagnostics, be able to deal with perceptual effects (e.g. errors of observation and assessment) and be able to recognise their own subjectivity. Professional competence is essential for evaluating the candidate's performance against the background of the assessment standard. It is also needed to construct a work situation appropriate to the competences to be assessed.

== Examples ==

For the skill "working at heights", a candidate should perform several activities on ladders, scaffolding, … Based on a checklist, his/her behaviour is observed.

== In Combination with ==

This method can be combined with criterion-focused interviews to fill gaps or cover skills that have not been observed (neither negatively nor positively). It can be combined with a multiple-choice or open-answer test for knowledge that is not made visible in practice.

= References/Notes =

* Catalogus Assessmentmethodes voor EVC, Agentschap Hoger Onderwijs, Volwassenenonderwijs, Kwalificaties en Studietoelagen, Ministry of Education and Training of the Flemish Community (2015). Online: [[http:~~/~~/www.erkennenvancompetenties.be/evc-professionals/evc-toolbox/bestanden/catalogus-assessmentmethodes-evc-2015.pdf>>http://www.erkennenvancompetenties.be/evc-professionals/evc-toolbox/bestanden/catalogus-assessmentmethodes-evc-2015.pdf]] (last accessed 17.08.2020)
* Jhpiego (2011): Simulation Training for Educators of Health Care Workers. Online: [[http:~~/~~/reprolineplus.org/system/files/resources/simulation_facilitatorsguide.pdf>>http://reprolineplus.org/system/files/resources/simulation_facilitatorsguide.pdf]] (last accessed 05.08.2020)
* Multiprofessional Faculty Development (2012): Teaching and Learning in Simulated Environments. Online: [[https:~~/~~/faculty.londondeanery.ac.uk/e-learning/teaching-clinical-skills/teaching-and-learning-in-simulated-environments>>https://faculty.londondeanery.ac.uk/e-learning/teaching-clinical-skills/teaching-and-learning-in-simulated-environments]] (last accessed 05.08.2020)
* Scottish Qualifications Authority (2019): Guide to Assessment. Online: [[https:~~/~~/www.sqa.org.uk/files_ccc/Guide_To_Assessment.pdf>>https://www.sqa.org.uk/files_ccc/Guide_To_Assessment.pdf]] (last accessed 05.08.2020)
* Vincent-Lambert, C. / Bogossian, F. (2006): A Guide for the Assessment of Clinical Competence Using Simulation. Online: [[https:~~/~~/pdfs.semanticscholar.org/bda7/dae4871a49e19fd2cc186823379518e39192.pdf>>https://pdfs.semanticscholar.org/bda7/dae4871a49e19fd2cc186823379518e39192.pdf]] (last accessed 05.08.2020)

== AT ==

== BE ==

== DE ==

== IT ==

== NL ==