Validation Study by
Thomas G. Snider-Lotz, Ph.D., Psychometric Statistician
Summary of Results:
A validation study was performed on the Personality
Insights style assessment instrument to determine its
psychometric characteristics. The instrument was administered
to 500 people to provide an adequate statistical sample.
Reliability Data:
Reliability is a measure of how consistent
the results are. When a person takes an assessment that is
reliable, the results are consistently the same or very close
in outcome. An unreliable instrument would yield results that
are widely scattered. If the instrument is reliable, we can
be confident that a person's observed score is close to his
or her "true score."
Reliability is measured on a scale of 0 to 1, where
0 represents completely unreliable and 1 represents completely
reliable. For purposes of this analysis, a reliability coefficient
in the upper .70s is considered good. As shown in the graphs
below, the reliability results ranged from 0.84 to 0.88,
indicating a reliable assessment.
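To illustrate how a reliability coefficient in this range can be computed, here is a minimal sketch that estimates internal-consistency reliability (Cronbach's alpha) for one scale. The item responses, the number of items, and the choice of Cronbach's alpha are assumptions for illustration only; the study does not state which reliability coefficient was used.

```python
# Illustrative sketch: estimating internal-consistency reliability
# (Cronbach's alpha) for one DISC scale. The item scores below are
# made-up data; the study does not specify which coefficient it used.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = respondents, columns = scale items."""
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 8 people answering 4 items on one scale.
responses = np.array([
    [3, 4, 3, 4],
    [1, 2, 1, 2],
    [4, 4, 3, 4],
    [2, 2, 2, 3],
    [3, 3, 4, 3],
    [1, 1, 2, 1],
    [4, 3, 4, 4],
    [2, 3, 2, 2],
])

print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
```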
Validity Data:
Validity is a measure of whether an instrument
is appropriate for the use to which it is put. The purpose
of the Discovery Report assessment is to classify people in
terms of "DISC" behavioral patterns. The type of validity
examined in this study is called "construct validity."
Construct validity can be assessed through input from
experts in the field of "DISC" behavioral styles. Another measure
of construct validity is the degree to which the assessment
results agree with outcomes generated by other widely accepted
instruments. Both measures are met because 1) the Discovery
Report assessment instrument was formulated by experts with
many years of experience with the DISC model of human behavior,
and 2) the Discovery Report assessment instrument was shown
to have a relatively high correlation with a widely accepted
instrument. A high correlation indicates high construct
validity. Please refer to the graph below to view the
validity results.
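As a sketch of the second line of evidence, agreement with another widely accepted instrument is typically quantified as a correlation between the two sets of scores for the same respondents. The scores below are hypothetical, and the comparison instrument is not named in the study.

```python
# Illustrative sketch: convergent evidence for construct validity as the
# Pearson correlation between scores on the assessment under study and a
# widely accepted comparison instrument. All scores here are hypothetical.
import numpy as np

discovery_scores  = np.array([18, 12, 21, 9, 15, 20, 7, 14])  # 0-24 scale
comparison_scores = np.array([17, 10, 22, 8, 16, 19, 9, 13])  # same people

r = np.corrcoef(discovery_scores, comparison_scores)[0, 1]
print(f"Correlation with comparison instrument: r = {r:.2f}")
```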
Standard Error of Measurement:
The standard error of measurement (SEM) indicates
the consistency of an instrument in terms of its own point
scale. Assessment results are plotted on a scale from 0 to 24.
The SEM of the Discovery Report assessment tool ranged from
1.6 to 2.0, indicating that a person who took the assessment
repeatedly would not be expected to see results in any
one category vary by more than about 2 points. This is a
further indication of the instrument's consistency.
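The reported SEM range is consistent with the standard relationship SEM = SD * sqrt(1 - reliability). The sketch below applies that formula; the scale standard deviation of 5 points is an assumption chosen only to show how reliabilities of 0.84 and 0.88 translate to SEMs of roughly 2.0 and 1.7 points on the 0-24 scale. The study does not report the actual scale standard deviations.

```python
# Illustrative sketch: the standard error of measurement (SEM) follows from
# the scale standard deviation and the reliability coefficient:
#     SEM = SD * sqrt(1 - reliability)
# The SD of 5.0 points on the 0-24 scale is an assumption for illustration.
import math

assumed_sd = 5.0
for reliability in (0.84, 0.88):
    sem = assumed_sd * math.sqrt(1 - reliability)
    print(f"reliability = {reliability:.2f}  ->  SEM of about {sem:.1f} points")
```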
Note from the staff
at DiscoveryReport.com:
We trust that the validation study information
is helpful to you as you consider the Discovery Report assessment
process. We try to emphasize that assessment results are
simply a tool to offer
feedback about a person's behavioral style. No one
can predict future behavior or indicate how a person will respond
in a given situation. We simply endeavor to help people see
that there
are basic patterns in human behavior.
Those patterns are observable and correlate with how people
TEND TO act, react and interact.
We know that a degree of subjectivity is inherent
in any behavioral instrument and that this subjectivity
increases with younger participants, but we have worked
diligently to minimize it and to use objective selection
criteria, particularly where we use pictures and stories
as part of the assessment process (BOTS for children).
We readily acknowledge that
there are many good instruments available. We believe that
our assessment tools are simple to use and that they are based
on an established and easy-to-understand model of human behavior.
We believe that our Discovery Reports are both practical and
unique. The child and teen versions of Discovery Reports are
distinctive in their inclusion of parents and teachers
in the process.
It is our sincere desire to be of help to those
who use our resources. If you have any questions or suggestions
regarding our approach or our materials, we invite you to contact
us.
Email Contact: support@discoveryreport.com