Discussion
Accurate self-assessment of personal and professional behaviours is thought to be a major component of professionalism in health care (Rees and Knight, 2007). Crucial aspects of self-assessment include the ability to appropriately identify acceptable objectives or outcomes, as well as the willingness to critically observe, compare, and contrast one's own behaviour with that of peers (Hansford and Hattie, 1982; Alicke, 1985; Lane and Gottlieb, 2004).
Previous studies report various degrees of disagreement between objectively measured and self-perceived exam-based competence in medical students (Blanch-Hartigan, 2011). This study confirms these findings in the context of written anatomy examinations administered to students in all but the last year of medical school. The correlation between actual performance and self-perceived scores was only weak to moderate, although statistically significant.
This is the first study to compare the self-assessment skills of students from different countries, although the number of students in each category is too small to draw any conclusions. Our results are also consistent with those of previous studies, which have generally shown that female medical students tend to under-estimate their performance more than males (Haist et al., 2003), although there are a few exceptions, with some studies showing no difference (Haq et al., 2005) and others the reverse pattern (Frischenschlager et al., 2005).
The cohort with the best self-assessment skills in our study was the Year 3 students (only one year after their final summative anatomy exam in Year 2). This is somewhat reassuring given the time and effort they would have invested in improving themselves over the years, as well as the feedback and mentoring provided by the faculty. It is possible that, had we studied the self-assessment of clinical rather than anatomical knowledge, we would have noted better self-assessment ability in Year 4 than in Year 1 students.
Our findings in relation to self-assessment and perceived difficulty of questions are only partially consistent with those of Burson et al. (2006), in that no difference was found in students' ability to predict their mark for the question type they considered hardest. In addition, no question type allowed students to estimate their marks more accurately, which suggests that accuracy may depend on exam-taking skills rather than on question format. However, since students in this study had never before been asked to self-assess their performance, it is possible that some, if not most, might not have taken the task seriously, thus affecting the results.
Our study focused on self-assessment in the context of written anatomy exams, which our students consider to be the most difficult examinations in the first two years of medical school. It still needs to be determined whether this lack of insight also extends to practical anatomy spotter examinations in our cohort. Interestingly, however, our results mirror those of Sawdon and Finn (2013), who reported a strong inverse relationship between medical students' performance in a practical anatomy exam and their ability to self-assess that performance, suggesting that the same factors may operate in both exam settings.
It is possible that some of the students in this study who under-estimated their abilities lacked confidence. Although it has been shown that healthy self-confidence helps students be better judges of themselves (Tavani and Losh, 2003), confidence does not equal competence.
It is not clear why medical students in this and many previous studies have poorly developed self-assessment skills. Although medical students could be considered among the most academically successful individuals, accurate self-assessment in written anatomy exams does not appear to be innate. Given the long and arduous journey to reach medical school, it is entirely possible that some students may start medical school with an inflated sense of their achievements. This may not be a strong enough base for them to develop self-assessment skills that accurately reflect their abilities (Vallone et al., 1990).
This could also be because we do not consistently provide students with factual and authentic feedback. As eloquently elucidated by Carter and Dunning (2008), feedback that is probabilistic, ambiguous, biased, or incomplete is prevalent but pointless. Being supportive is important, but it is not generally helpful on its own. For example, showing students how to take multiple-choice exams (how to look for logical fallacies, how to avoid "bad questions") is far more useful than simply saying "you could have done better" without explicitly explaining how.