20/20 Insight GOLD - Top 5 FAQs
What was the validation research performed for "Executive Leadership" and other standard surveys contained in the 20/20 Insight GOLD Survey Library?
The answer has three parts:

A. The process for "validating" a behavioural feedback survey is not the same as the process for validating a psychological instrument.

Since the technology of behavioural feedback surveys is an outgrowth of the tradition of psychological testing, it's natural to confuse the two kinds of assessments. However, even though both are often referred to as "assessments," they are radically different kinds of tools, and the procedures for validating them are different. Validity means one thing when evaluating a personality assessment or a psychological test, but it means something entirely different when evaluating a tool that gives direct feedback about behaviour - a completely different kind of assessment.

Validity and reliability information has been important in determining the usefulness of tests that measure intelligence, values or psychological characteristics. These are important aspects of people that cannot be observed or measured directly, so questions are asked in order to draw inferences about them. It is therefore important to determine whether the assessment really measures what it claims to measure. In other words, are the constructs valid? Are the constructs calculated from the answers highly correlated with phenomena in the real world? Do they support valid inferences based on the questions in the assessment? Will we get the same results, measurement after measurement?

Many of the early multi-source feedback instruments evolved from these traditions. Some even focused on traits, values, characteristics and other aspects which cannot be measured through direct questioning or direct observation. Their purpose was to ask carefully researched questions from which one could draw inferences about these important qualities. The constructs created from the answers were the main purpose and product of these assessments. Therefore, research was needed to verify (validate) that the inferences were not just theoretical notions, but actually coincided with what was happening in the real world.

20/20 Insight GOLD is a wholly different technology. Quite simply, it's a computer-assisted mechanism for gathering and giving feedback about specific observable behaviours. Unlike psychological instruments, the survey items (observable behaviours) themselves are the primary focus of the feedback - not constructs inferred from the item responses. In 360-degree feedback, many people are asked to report what they observed, and their aggregate response is reported. These responses themselves, not dimensions inferred from them, are the purpose and product of the assessment.

In fact, the 20/20 Insight GOLD surveys don't create or report any constructs. The items are clustered into categories only to make it easier to relate and analyse the feedback. Therefore, there is no need to do research to verify the validity of inferences or constructs, because no inferences or constructs are produced. Unlike personality or trait assessments, a "universal" validation that applies to all organisations is not appropriate.

That is not to say that validity isn't important. With 20/20 Insight GOLD, another kind of validity matters: establishing whether the surveys to be used within the organisation actually address the most important workplace behaviours at that site.
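To make the distinction concrete, the following is a minimal sketch (in Python) of how item-level 360-degree feedback can be collated and reported by category without computing any inferred constructs. It is an illustration only - the data layout, item wording and rating scale are assumptions, not the product's actual code or data format.

```python
from collections import defaultdict
from statistics import mean

# Each response is one rater's rating of one observable behaviour (a survey item).
# Categories exist only to group related items for reading; no category "score"
# is treated as a measured construct.
responses = [
    # (item, category, rater group, rating on an assumed 1-5 scale)
    ("Explains decisions openly",    "Communication", "Direct report", 4),
    ("Explains decisions openly",    "Communication", "Peer",          5),
    ("Listens without interrupting", "Communication", "Direct report", 3),
    ("Sets clear priorities",        "Planning",      "Peer",          4),
]

# Aggregate ratings per item - the item itself is the unit of feedback.
by_item = defaultdict(list)
for item, category, group, rating in responses:
    by_item[(category, item)].append(rating)

# Report: items listed under their category heading, with the average per item.
by_category = defaultdict(list)
for (category, item), ratings in by_item.items():
    by_category[category].append((item, mean(ratings), len(ratings)))

for category, items in by_category.items():
    print(category)
    for item, avg, n in items:
        print(f"  {item}: {avg:.1f} (from {n} raters)")
```

The point of the sketch is simply that the output is the collated answers themselves, grouped for readability; nothing is inferred from them.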
When the 20/20 Insight GOLD surveys are used, we encourage organisations to do internal competency research, then deselect, add and modify items in the standard survey categories to align with local practices. These customised surveys must be verified locally, because every organisation has a different business, culture and set of priorities. Validation can be accomplished using importance and frequency surveys, expert panel reviews and pilot assessment projects.

Reliability also doesn't have the same meaning with this kind of feedback tool. Reliability refers to the consistency of measurements. With a behavioural feedback survey, you wouldn't expect or want to get the same results with each subsequent measurement, because the assessment focuses on observable skills, competencies and abilities, which are expected to improve over time.

Once again, with behavioural feedback surveys, the important validity question is whether the survey addresses the most important workplace behaviours of the organisation using it. This view of validity also applies to the organisational surveys processed by 20/20. Survey items are organised into categories because of their topical relationship, not because they are used to create constructs - none are created. In other words, the answers to these questions are what we seek. The categories simply help us focus on areas of interest. The survey doesn't "measure" the category. It simply asks questions that leaders want to ask and collates the replies as feedback information. The items are valid if they are, in fact, exactly the questions leadership needs to ask to find out what it needs to know to improve the organisation.

Because a psychological instrument is intended to be used as-is with all possible subjects, its validation research is normally published in psychological testing journals. This is not the case with the validation research performed for behavioural feedback surveys.

B. "Executive Leadership" and the other standard surveys in the 20/20 Insight GOLD Survey Library were researched and developed using standard behavioural feedback (competency) research and development practices.

The 20/20 Insight GOLD Survey Library contains no psychological instruments. All of the individual multi-source feedback surveys are behavioural feedback surveys. Optimally, standardised behavioural feedback surveys are the subject of not one but two cycles of validation research, both of which differ dramatically from the procedures used to validate psychological instruments: (1) author/publisher validation and (2) local validation.

The surveys featured in the Survey Library have been developed by experts with extensive experience and learning in their specialised areas. Each used a standard method for researching competencies. The first step was a comprehensive review of the literature to determine what current research and authoritative writing have defined as highly desired competencies. Based on this research, the developer then created draft competency lists that model the target area of performance. These lists were screened for redundancy, importance and validity, based on the survey authors' experience in organisations. The lists were subsequently validated by the developers' client organisations, a process in which the models were examined by organisational subject matter experts, used in training and development programs, checked for improvements in performance, and finally referenced in quantitative and qualitative feedback - an effort that often spanned several years.
During that process, the survey items were also circulated to a variety of other experienced professionals for evaluation and suggestions. The final versions of the surveys were studied and revised as necessary by the publisher before acceptance into the Survey Library. The surveys were then offered to organisations during beta testing and later for general use in 360-degree feedback programs. Feedback from these groups was studied by the publisher and appropriate revisions were made. In the case of 20/20 Insight GOLD, the surveys included in the Survey Library have been systematically reviewed and used by hundreds of organisations. The standard behavioural feedback surveys in the Survey Library were validated for developmental use, not for linkage to personnel or compensation decisions.

C. All behavioural feedback surveys must be customised and validated locally.

Different industries involve dramatically different business practices; what is understood as desired performance varies widely from organisation to organisation, as does the language that describes it. For example, in the U.S. Army, the Federal Reserve Bank and the Baptist church, leadership is defined and practised differently. Yet each wants to use 360-degree feedback to hold a mirror to its leaders' performance. While there are many common behaviours, many are different. Therefore, the second phase of validation must inevitably involve local validation. The author's research does not - and cannot - establish universal validity. While a one-size-fits-all instrument was possible and desirable in the field of psychological testing, it is neither possible nor desirable for feedback about specific workplace behaviour.

A published standard competency list is intended to serve as an optimum starting point for organisational research, customisation and local validation. This process requires people within the organisation (or a consultant) who are familiar with the area of competence to review the standard survey. Items inappropriate to the local workplace may be deleted and additional items added. Wording can be modified to use business-specific terms and to align the items with the organisational culture. The new behaviour list is then reviewed for size, frequency and importance by local subject matter experts and stakeholders, and further revisions are made if appropriate. A pilot assessment with a sampling of subjects tests the effectiveness of the survey, and feedback from the process is studied for possible refinements. The final product is a locally validated list of observable behaviours; a sketch of this screening step appears after the footnote below.

For behavioural feedback surveys, validity equates to relevance to workplace performance: does the survey focus on the most important workplace behaviours in this particular organisation? In the end, the most valid survey process may not employ an entire competency list, but focus on the portions of it that executives deem to have the highest potential for individual and organisational improvement.

1. For a summary of the research conducted to develop the original "Team Player" survey, see the monograph by Dennis E. Coates, Ph.D., "Report of Field Test Results for 20/20® Insight." Newport News: Performance Support Systems, 1994.
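As a rough illustration of the local screening step described in part C, the sketch below (in Python) shows how a standard item list might be reduced, reworded and extended using local importance and frequency ratings. The item texts, cut-off values and field names are hypothetical assumptions for illustration, not part of 20/20 Insight GOLD.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class SurveyItem:
    text: str
    category: str
    importance: float  # mean rating from local subject matter experts, assumed 1-5
    frequency: float   # how often the behaviour is observed locally, assumed 1-5

# Standard library items, annotated with local importance/frequency ratings.
standard_items = [
    SurveyItem("Communicates the vision clearly", "Leadership", 4.8, 4.2),
    SurveyItem("Delegates routine decisions",     "Leadership", 2.1, 1.9),
    SurveyItem("Gives timely feedback",           "Coaching",   4.5, 3.8),
]

IMPORTANCE_CUTOFF = 3.5  # hypothetical thresholds chosen by the local panel
FREQUENCY_CUTOFF = 2.5

# Deselect items the local panel rated as low importance or rarely observed.
retained = [i for i in standard_items
            if i.importance >= IMPORTANCE_CUTOFF and i.frequency >= FREQUENCY_CUTOFF]

# Reword retained items in business-specific terms, and add locally written items.
local_rewording = {
    "Communicates the vision clearly": "Communicates the branch strategy clearly",
}
customised = [replace(i, text=local_rewording.get(i.text, i.text)) for i in retained]
customised.append(SurveyItem("Models our safety-first standard", "Leadership", 5.0, 4.0))

for item in customised:
    print(f"[{item.category}] {item.text}")
```

In practice the resulting list would then go to an expert panel review and a pilot assessment, as described above, before being treated as locally validated.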
20/20 Insight GOLD SOFTWARE
© Copyright 2006-2013 – Matrix Vision Pty Limited. All rights reserved.