
Integrity Testing

Integrity testing as a pre-employment screen is cited by some (e.g., Berry, Sackett & Wiemann, 2007; Sackett, Burris & Callahan, 1989) as an early attempt to detect dishonesty among applicants without resorting to polygraph testing. Integrity testing expanded quickly and received even more attention when the Employee Polygraph Protection Act of 1988 (EPPA) barred most employers from using polygraph testing as a selection tool; these paper-and-pencil instruments were sought as a lawful alternative. Over the last two decades, integrity test use has grown steadily into one of the larger selection tool domains, and experts no longer view it as merely a legal proxy for the polygraph. In fact, integrity testing has been shown to be one of the most valid selection tools currently available, and one of those producing the least adverse impact (Berry et al., 2007; Ones, Viswesvaran & Schmidt, 1993; Sackett & Wanek, 1996; Schmidt & Hunter, 1998).

Current Models of Integrity

Two categories of integrity exams have been proposed and are generally accepted by experts: overt integrity tests and personality-oriented tests (Sackett & Wanek, 1996). Overt tests ask direct questions about a test taker's attitudes or past behaviors. Commonly, questions ask how often an individual has engaged in theft, drug use, criminal behavior, or other wrongdoing. Additional questions directly probe beliefs on these same topics, such as punitiveness, endorsement of rationalizations for such behaviors, and remorse for past actions (Berry et al., 2007).

Personality-oriented exams are more covert: they assess personality constructs believed to underlie integrity (e.g., socialization, positive outlook, orderliness/diligence). These tests may include questions assessing thrill-seeking, social conformity, attitudes toward authority, aggression, conscientiousness, and dependability. Questions are phrased much like those on personality inventories; a test taker strongly agrees, agrees, is neutral, disagrees, or strongly disagrees with statements measuring a specific domain or construct (Berry et al., 2007).

Integrity & Job-Relevant Criteria

Integrity manifests in action, or behavior; as such, integrity measures should gauge an individual's propensity to behave in certain ways. Both categories of integrity exams are developed to measure counterproductive workplace behaviors (CWBs). These behaviors, if performed regularly by an individual, suggest that the person lacks integrity. They vary, but examples include disciplinary problems, tardiness, absenteeism, turnover, violence, substance abuse, property damage, organizational rule breaking, and theft. All of these behaviors are harmful to organizations and agencies: they directly interfere with the accomplishment of job tasks and/or directly reduce an agency's bottom line. Thus, both experts and the courts consider these behaviors job-relevant criteria, and prediction of such behaviors constitutes a bona fide business necessity (see Legal).

Counterproductive Workplace Behaviors

Generally, research has found that both types of integrity exams (overt and personality-oriented) predict CWBs about equally well (Berry et al., 2007), and that the relationships are moderate in size. In their meta-analysis, Ones et al. (1993) found that overt tests predicted a composite of CWBs (disciplinary problems, tardiness, absenteeism, turnover, violence, substance abuse, property damage, and organizational rule breaking) at ρ = .39 (.27 uncorrected). Personality-oriented measures predicted CWBs slightly less well, ρ = .29 (.20 uncorrected).
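The corrected values above adjust the observed correlations for measurement artifacts such as unreliability in the criterion. As a simplified, purely illustrative sketch of this type of correction (the criterion reliability of about .48 used below is an assumed value chosen only to make the arithmetic transparent, not a figure taken from the cited meta-analysis, whose published corrections may involve additional artifacts):

\[
\rho = \frac{r_{\text{observed}}}{\sqrt{r_{yy}}} \approx \frac{.27}{\sqrt{.48}} \approx .39
\]

where \(r_{yy}\) is the reliability of the criterion measure.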

Theft Behaviors

One counterproductive workplace behavior of special interest to many businesses and organizations is theft. The Ones et al. (1993) meta-analysis found a relationship between integrity exams and theft behaviors. Overt exams predicted external measures of actual theft and dismissal for theft at ρ = .13 (.09 uncorrected). A stronger relationship was found when overt integrity exams predicted admissions of theft and self-reports of dismissal for theft, ρ = .33 (.30 uncorrected). The latter, self-report value is likely the more accurate estimate of the true validity of integrity exams for predicting theft: many thefts go undetected or unreported, which attenuates the observed relationship when external records serve as the criterion, whereas admissions capture more of the actual behavior and therefore better reflect the integrity-theft relationship.

Overall Job Performance

Apart from predicting various CWBs, integrity exams are among the best predictors of overall job performance. Ones et al. (1993) reported a meta-analytic validity of ρ = .41 (.23 uncorrected) for overt and personality-oriented integrity exams predicting job performance. Using this value, Schmidt and Hunter (1998) found that integrity exams add more incremental validity to cognitive ability in predicting job performance than any other personnel selection tool. Combined, the two produce a large validity estimate (ρ = .65), meaning that roughly 42 percent of the variance in job performance is explained by this composite (an extremely large value in selection).
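The 42 percent figure follows directly from squaring the composite validity coefficient, since the proportion of criterion variance explained by a predictor composite equals its squared correlation with the criterion:

\[
\rho^{2} = (.65)^{2} = .4225 \approx 42\%
\]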

Faking on Integrity Tests

One issue relevant to integrity tests is candidate faking. The concern is that individuals may respond to questions in socially desirable ways (i.e., not according to their actual beliefs) or may decline to admit to actual behaviors on the overt portion of the exam. In either case, such response patterns could inflate exam scores. This is all the more troubling because such responding is itself dishonest, reflecting a lack of the very quality the test aims to measure. Individuals who should be screened out could instead receive inflated exam scores.

Research does suggest that faking is possible (Ellingson et al., 1999). However, the prevailing view is that while people can fake when instructed to do so, applicants do not appear to fake to a meaningful degree in real-world settings (Hough et al., 1990; Morgeson et al., 2007; Ones & Viswesvaran, 1998; Ones, Viswesvaran & Reiss, 1996). This conclusion is supported by the fact that faking and socially desirable responding can be detected with various measures (Morgeson et al., 2007; Ones & Viswesvaran, 1998), yet controlling for such measures has virtually no effect on validity estimates. Thus, faking and social desirability appear to have little practical impact on the validity of integrity measures (Hough et al., 1990; Morgeson et al., 2007; Ones & Viswesvaran, 1998; Ones, Viswesvaran & Reiss, 1996).

Adverse Impact

An important issue with any selection measure is adverse impact, and integrity tests are no exception. Unlike many other selection tools, however, integrity testing research is very promising with respect to adverse impact on protected classes (for a description of adverse impact, see the AI paper). Often the more valid tools (i.e., those with the strongest predictive relationships with on-the-job performance), such as cognitive ability measures, produce the greatest adverse impact. Integrity exams appear to be an exception to this trend. As the previous sections indicate, integrity exams have strong predictive relationships with the criteria of interest, especially performance. Yet research shows minimal to no differences in integrity exam scores across protected groups, meaning that integrity exams do not adversely affect these groups (Ones, Viswesvaran & Schmidt, 1996). Virtually the only sub-group differences appear between men and women, with women scoring .11 to .27 standard deviations higher than men; differences of this size are unlikely to violate the 4/5ths rule of thumb (Ones et al., 1996).
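To see why standardized differences of this size are unlikely to trigger the 4/5ths rule, the sketch below works through an illustrative calculation. It assumes normally distributed scores with equal variance in both groups, a women-minus-men difference at the upper bound reported above (d = .27), and a pass/fail cutoff set one standard deviation below the overall mean; the cutoff and distributional assumptions are illustrative only, not taken from the cited research.

```python
from statistics import NormalDist

# Illustrative assumptions (not from the cited studies):
# scores are normal with equal spread in each group, women average d SDs above men,
# and applicants pass if they score above an example cutoff of -1 pooled SD.
d = 0.27        # assumed standardized mean difference (women - men)
cutoff = -1.0   # assumed pass/fail cutoff, in pooled-SD units from the grand mean

women = NormalDist(mu=+d / 2, sigma=1.0)
men = NormalDist(mu=-d / 2, sigma=1.0)

pass_women = 1 - women.cdf(cutoff)    # expected proportion of women passing
pass_men = 1 - men.cdf(cutoff)        # expected proportion of men passing

impact_ratio = pass_men / pass_women  # lower group's rate / higher group's rate
print(f"pass rate, women: {pass_women:.3f}")    # ~0.87
print(f"pass rate, men:   {pass_men:.3f}")      # ~0.81
print(f"impact ratio:     {impact_ratio:.3f}")  # ~0.93, above the 0.80 threshold
```

Under these assumptions the impact ratio stays well above the .80 threshold; more stringent cutoffs or larger group differences would narrow that margin, which is why the conclusion is stated as "likely" rather than guaranteed.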

Integrity Exams in Public Safety Selection

Few published integrity exams have been developed or validated specifically for use in the public safety sector. I/O Solutions has researched and published on the use of integrity exams in this sector. One study, Tawney (2008), found that agencies could use integrity exams as an early-stage pre-employment selection tool to mitigate failure rates in later, more expensive processes (e.g., polygraphs, background checks, and psychological evaluations). In total, savings of up to 50 percent of the original cost of an entry-level selection process could be realized by using integrity exams (Tawney, 2009).

The exam used in Tawney’s 2009 study was developed by I/O Solutions specifically for the public safety setting. Research on this exam has shown that it is valid for use in this industry, with meaningful correlations (corrected for criterion unreliability) with business-relevant criteria (hard drug use, theft, alcohol, and DUIs: r(192) = -.21, -.26, -.24, and -.29, respectively), while showing no adverse impact against protected classes (Tawney, 2009).

