itSMFI Forum Focus - September 2016 | Page 19
By Paul Collins, SkillsTx.com
It sounds like a simple question with an obvious answer: “Ensure everyone completes the same
surveys, use the same calculations for each result, and surely accuracy and objectivity will follow.”
Unfortunately, the answer is neither that obvious nor that simple. Researchers and academics
understand that it takes a lot more to be able to rely fully on their results. They therefore invest
a lot of time and effort in creating a ‘research instrument’.
That said, there are practical steps that, if followed, will improve the
quality of the outcomes.
Step One, the Survey:
Structure the questions very carefully, and take the assessment
objectives into account when authoring the surveys.
Ensure you understand the target participants and, as far as possible,
make the questions relevant to the audience.
For example, you might use acronyms and more technical
language if the surveys are aimed at an expert group. If you
construct long, descriptive questions for an expert audience, they
will be frustrated by reading unnecessary content.
The opposite applies to a non-expert group, where an educational
approach is more suitable.
Remove ambiguity by reducing the use of words such as ‘and’.
Ensure respondents have been suitably briefed regarding the
objectives of the assessment and where possible, include guidance
and instructions within the surveys.
Include answer options with an ‘opt-out’ and graduated
options such as ‘I do not understand the question’ and ‘I fully
perform this task’ or ‘I partially perform this task’.
Opt-out answers can be used for quality control to improve question
wording.
Build in question branching where appropriate, based on the
responses to previous questions. This improves efficiency and
reduces unnecessary effort and completion time.
Provide accurate time estimates and progress monitoring that will
allow the participant to schedule the surveys within their working
day.
Finally, consider the educational value of the questions. It is often
possible to include educational content such as examples and
explanations of technical terminology.
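The branching and opt-out ideas above can be sketched in code. The following is a minimal illustration, assuming a simple dictionary-based question model; the question wording, identifiers, and structure are hypothetical, not taken from any particular survey tool:

```python
# Minimal sketch of survey question branching with opt-out tracking.
# All question IDs, texts, and answer options are illustrative assumptions.

OPT_OUT = "I do not understand the question"

QUESTIONS = {
    "q1": {
        "text": "Do you perform this task?",
        "answers": ["I fully perform this task",
                    "I partially perform this task",
                    OPT_OUT],
        # Branching: only ask the follow-up when the task is performed at all.
        "branch": {"I fully perform this task": "q2",
                   "I partially perform this task": "q2",
                   OPT_OUT: None},
    },
    "q2": {
        "text": "How often do you perform this task?",
        "answers": ["Daily", "Weekly", "Rarely"],
        "branch": {},  # no follow-up: the survey ends here
    },
}

def run_survey(responses):
    """Walk the branching structure; record opt-outs for quality control."""
    asked, opt_outs = [], []
    qid = "q1"
    while qid is not None:
        answer = responses[qid]
        asked.append(qid)
        if answer == OPT_OUT:
            opt_outs.append(qid)  # flag this question for wording review
        qid = QUESTIONS[qid]["branch"].get(answer)
    return asked, opt_outs
```

Here the opt-out list doubles as the quality-control feed mentioned earlier: questions that attract frequent opt-outs are candidates for rewording.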
Step Two, targeting:
Within every organization there is a broad range of perspectives.
To name a few, these can relate to aspects such as:
Hierarchy
Location
Experience and/or knowledge
Length of service
Step Three, validate and/or triangulate:
While following the two previous steps will certainly help generate an accurate
result, variances will occur and you need to cater for the real possibility that
some surveys will be answered without sufficient due attention or time.
Therefore, a detailed analysis of the recorded responses using an
outlier approach is a useful method for identifying the
self-assessments that should be considered for validation.
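As a simple illustration of the outlier approach, assuming each self-assessment has been reduced to a single overall score, the common interquartile-range (IQR) rule could be applied as follows; the 1.5 multiplier is a widely used convention, not a requirement:

```python
# Hedged sketch: flag self-assessments whose overall scores are statistical
# outliers under the IQR rule. Score scale and threshold are assumptions.
from statistics import quantiles

def flag_outliers(scores, k=1.5):
    """Return indices of scores outside [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, _, q3 = quantiles(scores, n=4)  # quartile cut points
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [i for i, s in enumerate(scores) if s < lo or s > hi]
```

The flagged assessments are not discarded; they are simply the ones to shortlist for the validation described below.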
The use of triangulation can also identify unreliable results.
Triangulation includes the equivalent of a 360° survey, which enables
you to compare the self-assessment against the 360° responses and
then analyse the results for dissonance. If significant variance is
identified, those self-assessments should be considered for a detailed
validation.
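A triangulation check of this kind can be sketched as follows, assuming self-assessment scores and 360° responses are recorded on the same numeric scale; the variance threshold here is an illustrative assumption that would need calibrating for a real assessment:

```python
# Hedged sketch: compare each self-assessment score against the mean of the
# corresponding 360-degree responses and flag significant dissonance.
# Data shapes and the threshold value are illustrative assumptions.

def flag_dissonance(self_scores, peer_scores, threshold=1.0):
    """Return IDs of participants whose self-score differs from the mean
    360-degree score by more than `threshold`, marking them for validation."""
    flagged = []
    for person, own in self_scores.items():
        peers = peer_scores.get(person, [])
        if not peers:
            continue  # no 360-degree data: nothing to triangulate against
        mean_peer = sum(peers) / len(peers)
        if abs(own - mean_peer) > threshold:
            flagged.append(person)
    return flagged
```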
Validation can include:
Searching for evidence, or direct observation, and then adjusting the
result accordingly, whilst retaining a copy of the original responses
for future analysis and decision making.
Following these three steps, and retaining all recorded responses, will provide a
much richer dataset with varying levels of accuracy that can be used for more
granular and informed decision making.