Discovery Report Assessment Validation Study Results
(DISC Model of Human Behavior)
You may be asking some really good questions as you consider our program and the use of DISC personality profiles and assessments. You may wonder ...
- Are your Discovery Reports and the DISC personality assessment valid?
- How do I know that they are reliable?
- How does your DISC assessment compare with others that I have heard of?
We have data indicating that our assessment is both highly reliable
and highly valid, in terms of statistical analysis as well as actual
feedback from people who have experienced Discovery Reports firsthand.
Statistical Validation Study:
A validation study was performed by psychometric statistician Thomas
G. Snider-Lotz, Ph.D. A summary of the results of the study appears below.
DISC Assessment Validation Study Summary
(calculations performed by Thomas G. Snider-Lotz, Ph.D., Psychometric
Statistician)
A validation study was performed on the Personality Insights
style assessment instrument to determine its psychometric characteristics.
The instrument was administered to 500 people to provide a sound
statistical sample.
Reliability Data:
Reliability is simply a measure of how consistent results
are. When a person takes a reliable assessment, the results
are consistently the same or very close in outcome; an unreliable instrument
yields results that are widely scattered. If the instrument is reliable,
we can be confident that a person's observed score is close to his or her "true
score."
Reliability is measured on a scale of 0 to 1, where 0 represents
completely unreliable and 1 represents completely reliable. For purposes
of this analysis, a reliability score in the upper .70s is considered
good. As the graphs below show, reliability results
ranged from 0.84 to 0.88, indicating a reliable assessment.
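For readers who want to see how a reliability coefficient on that 0-to-1 scale can be computed, here is a minimal sketch using Cronbach's alpha, one common internal-consistency measure. The study summary does not state which coefficient was used, and the item scores below are invented purely for illustration.

```python
# Illustrative sketch only: computing an internal-consistency reliability
# coefficient (Cronbach's alpha) from item responses.
# The responses below are hypothetical, not data from the validation study.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: rows = respondents, columns = assessment items."""
    k = items.shape[1]                              # number of items
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses from 6 people on 4 items
responses = np.array([
    [4, 3, 4, 4],
    [2, 2, 1, 2],
    [3, 3, 3, 4],
    [1, 1, 2, 1],
    [4, 4, 3, 4],
    [2, 3, 2, 2],
])
print(round(cronbach_alpha(responses), 2))  # values near 1 indicate consistency
```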
Validity Data:
Validity is a measure of whether an instrument is appropriate
for the use to which it is put. The purpose of the Discovery Report assessment
is to classify people in terms of "DISC" behavioral patterns.
This type of validation is known as "construct
validity." Construct validity can be assessed through input from experts
in the field of "DISC" behavioral styles. Another measure of
construct validity is the degree to which the assessment results agree
with outcomes generated by other widely accepted instruments. Both measures
were met: 1) the Discovery Report assessment instrument was formulated
by experts with many years of experience with the DISC model of human
behavior, and 2) the Discovery Report assessment instrument was shown
to have a relatively high correlation with a widely accepted instrument.
A high correlation indicates high construct validity.
Please refer to the graph below to view the validity results.
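To make the correlation idea concrete, the sketch below computes a Pearson correlation between two hypothetical sets of scores for the same people, one imagined as coming from the Discovery Report instrument and one from another accepted instrument. The numbers are placeholders, not data from the study.

```python
# Illustrative sketch only: construct validity via convergent correlation.
# Both score lists below are hypothetical "D" scores for the same eight people.
import numpy as np

discovery_d_scores = np.array([18, 6, 12, 21, 9, 15, 3, 20])  # hypothetical
other_instrument_d = np.array([17, 8, 11, 22, 10, 14, 5, 19])  # hypothetical

r = np.corrcoef(discovery_d_scores, other_instrument_d)[0, 1]
print(f"Pearson correlation: {r:.2f}")
# A correlation near 1 suggests the two instruments measure the same construct.
```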
Standard Error of Measurement:
The standard error of measurement (SEM) is an indication of the
consistency of an instrument expressed on its own point scale. Assessment
results are plotted on a scale from 0 to 24. The SEM of the Discovery Report
assessment tool ranged from 1.6 to 2.0, indicating that a person who
took the assessment repeatedly would not be expected to see scores in any
one category vary by more than about 2 points. This is a further indication
of the instrument's consistency.
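The summary does not show a formula, but the conventional relationship is SEM = SD × √(1 − reliability). The sketch below plugs the reported reliabilities into that formula using an assumed standard deviation of 5 points on the 0-to-24 scale (an assumption for illustration only, not a figure from the study) to show how SEM values near the reported 1.6-to-2.0 range can arise.

```python
# Illustrative sketch only: the conventional formula SEM = SD * sqrt(1 - reliability).
# The standard deviation below is an assumed value for illustration,
# not one reported in the validation study.
from math import sqrt

assumed_sd = 5.0  # assumed points of spread on the 0-24 scale
for reliability in (0.84, 0.88):
    sem = assumed_sd * sqrt(1 - reliability)
    print(f"reliability {reliability:.2f} -> SEM about {sem:.1f} points")
# With these inputs the SEM lands near the 1.6-2.0 point range described above.
```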
Actual User Feedback:
After many thousands of assessments given online and at live
seminars, we can report that user feedback indicates the reports are useful
for their intended purpose (personal growth and people-skill awareness).
Such empirical feedback is a further consideration in the validity of the
assessment tool. Well over 90% of participants offer very positive feedback,
and the rare cases in which a participant has disagreed with the assessment
results amount to less than one tenth of one percent.
We are consistently told that our "Discovery Reports" are
helpful and accurate.
Note from the staff at DiscoveryReport.com:
We trust that the validation study information is helpful
to you as you consider the Discovery Report assessment process. We try
to emphasize that assessment results are simply a tool to offer
feedback about a person's behavioral style. No one can predict
future behavior or indicate how a person will respond in a given situation.
We simply endeavor to help people see that there are basic patterns
in human behavior. Those patterns are observable and correlate
with how people TEND TO act, react and interact.
We know that a degree of subjectivity is inherent in any
behavioral instrument, and that the degree of subjectivity increases with
younger participants. We have worked diligently to minimize subjectivity
and to use objective selection criteria, particularly where we use pictures
and stories as part of the assessment process (BOTS for children).
We readily acknowledge that there are many good instruments
available. We believe that our assessment tools are simple to use and
that they are based on an established and easy-to-understand model of
human behavior. We believe that our Discovery Reports are both practical
and unique. The child and teen versions of Discovery Reports are distinctive
in their inclusion of the parents and teachers in the process.
It is our sincere desire to be of help to those who use
our resources. If you have any questions or suggestions regarding our
approach or our materials, we invite you to contact us.
Questions? Contact us.