Description

Open questions are questions without predefined answer options, so responses are not influenced by given choices. In the written test, the questions are answered in writing. Open questions can be used to elicit information in text form or numerical information (e.g. a number of minutes). They are suitable, for example, when the understanding of a situation is to be assessed or when the number of possible answer options would be too large.

Case studies can be regarded as a sub-form of the written test (open answers). They can be used to simulate everyday working life and its associated tasks. The tasks to be solved address a problem common to the industry; they test analytical and organizational competencies, such as the approach to a difficult problem.
Case studies measure action orientation, entrepreneurial thinking and understanding of complexity.


Quality Concepts

Validity

Open questions are used to check knowledge or situational interpretation. Their disadvantage is that they measure the ability to express oneself on paper more than the actual ability to perform in real life: they show that candidates know how to act, not that they are able to act. Answers are checked against a checklist but require interpretation by skilled assessors.
Open answers are most suitable in situations where new information should be gained, where respondents should not be primed by given response options, or where holistic feedback is sought and predefined answers would significantly limit the informative value of the response.

Reliability

Tests can be intimidating for people who have had bad experiences with these types of tests in previous learning contexts.
Research has shown that dissatisfied people give longer answers to express their dissatisfaction. Thus, the respondent's mood influences the length of the response, which limits reliability.
Differing sizes of the answer field for the same question also affect reliability.

Limitations

Since answering open questions requires a high degree of formulation competence, a lack of competence cannot necessarily be inferred from an inadequate answer.
Open questions are not suitable for measuring practical skills. They are only of limited help when assessing social skills.


Considerations

Tips

The questions should be clearly formulated. It must be clear to the candidate what form of answer is expected (e.g. bullet points, a short essay, several details).
With extensive case studies, more time is needed to analyze the text and answer the questions.
In order to reduce chance hits and thus increase reliability, each requirement dimension should be covered by several independent observation opportunities.

Traps

The sizes of the answer fields should be adjusted to the expected scope of the response, and a reasonable amount of time should be allotted for answering the question.

Scoring Tools

A correction key (specifying what the assessors expect to see) should be in place to enable the assessment.
To assess the case study, the assessor uses a model solution (ordered chronologically by the items in the case study) and an observation sheet (sorted by competencies). The answers given are compared with the model solution. To ensure evaluation objectivity, particularly creative answers do not receive additional points. If a required answer has not been given, the candidate can be asked about it per item. The ticks are then added up and entered on an appropriate scale. Finally, the assessors compare their observations with each other in order to record an overall result.
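The tick-counting step described above can be sketched in code. This is a hypothetical illustration, not part of the source method: the function names, the competencies, and the 1–5 scale are assumptions chosen for the example.

```python
def score_sheet(ticks_by_competency, max_ticks, scale_max=5):
    """Map tick counts per competency onto a 1..scale_max scale.

    ticks_by_competency: ticks one assessor entered per competency.
    max_ticks: the maximum possible ticks per competency
               (taken from the model solution).
    """
    scores = {}
    for competency, ticks in ticks_by_competency.items():
        fraction = ticks / max_ticks[competency]
        # Linear mapping of the tick fraction onto the scale.
        scores[competency] = round(1 + fraction * (scale_max - 1))
    return scores


def combine_assessors(sheets):
    """Average the per-competency scores of several assessors
    into an overall result."""
    combined = {}
    for competency in sheets[0]:
        combined[competency] = sum(s[competency] for s in sheets) / len(sheets)
    return combined


# Example: two assessors evaluate the same case study.
max_ticks = {"analysis": 6, "organization": 4}
a1 = score_sheet({"analysis": 5, "organization": 2}, max_ticks)
a2 = score_sheet({"analysis": 4, "organization": 3}, max_ticks)
overall = combine_assessors([a1, a2])
```

In practice the final comparison between assessors is a discussion rather than a mechanical average; the averaging here merely stands in for that consolidation step.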


Implementation

Information for Standard

The written test with open answers is difficult to standardize. Evaluation of the answers by several assessors can increase the validity of the results. A marking guide should allow a wide range of possible answers without compromising the professionalism of the test.

Development

The questions should be designed so that the expected scope of the answers is transparent. The predefined text field can give the candidate an indication of the expected length. The questions should be derived from the UNITs of the competences to be measured. In the assessment, care should be taken not to grade linguistic expression itself.

Needs/Set-Up

Besides pen and paper, a computer can also be used to answer the questions.

Requirements for Assessors

Assessors need comprehensive skills to evaluate complex texts without bias. They must be able to identify content and professional competence even when articulation is poor. Assessing the answers requires in-depth professional expertise.

Examples

A case study is possible in which the candidate describes how they would react in the event of an accident at work.

In Combination with

Since the test does not measure practical skills, it should be combined with, for example, Role Play or Observation. It can also be combined with a multiple-choice test to cover more factual knowledge.
