How to recognize and avoid errors in the job evaluation rating process

By Christopher Banks
Canadian HR Reporter | Last Updated: 02/26/2003

Whether used to determine pay or to set goals through performance management, job evaluations are the foundation for decision-making. Unfortunately, the process is prone to error.

Job evaluation deals with the assessment of the level and type of skills, knowledge, responsibilities and working conditions necessary for an individual to carry out the duties of a position. It’s the formal analysis of positions within an organization and the subsequent relating of each position to others in a systematic way.

Although there are many methods of job evaluation, such as ranking, job classification and the point method, the most common approach is the factor comparison method. With this method, a number of compensable factors are identified, such as education or problem-solving; each job is then rated according to how much of each compensable factor it requires.
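The mechanics of rating jobs on compensable factors can be sketched in a few lines of code. The factor names, weights and level values below are hypothetical examples chosen for illustration, not a prescribed scheme:

```python
# Illustrative sketch of scoring jobs on compensable factors.
# Factor names, weights and levels are hypothetical, not a standard.

FACTOR_WEIGHTS = {"education": 30, "problem_solving": 40, "working_conditions": 30}

def job_score(ratings):
    """Each factor is rated on a level from 1 to 5; the job's score is
    the weight-adjusted sum across all compensable factors."""
    return sum(FACTOR_WEIGHTS[factor] * level for factor, level in ratings.items())

# Two hypothetical positions rated on the same 'ruler'.
technician = {"education": 4, "problem_solving": 3, "working_conditions": 2}
clerk = {"education": 2, "problem_solving": 2, "working_conditions": 1}

print(job_score(technician))  # 300
print(job_score(clerk))       # 170
```

Because every position is scored against the same factors and weights, the resulting numbers can be compared directly to build a job hierarchy.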

The factor method has some compelling advantages. It can be applied to a diverse array of jobs, including newly created jobs. It can be tailored to the uniqueness of different organizations. It uses the same ‘ruler’ to rate all jobs. It allows employees and managers to have input into the process. And the process is transparent.

Individuals who rate jobs (raters) determine what level of each compensable factor is required for each position. There are several methods of gathering job evaluation information, such as direct observation, employee and manager interviews and questionnaires. However, decisions on job evaluation ratings are most often made by a group of individuals, often called rating committees.

Since human judgment is involved in job evaluation in many of the same ways as it is involved in performance appraisal, many of the same errors that are seen in the appraisal of performance are seen in the appraisal of jobs. Errors can occur in the rating of positions, both in the factor comparison and the point methods.

Halo-horn effect:

This occurs when the rating of a position on one compensable factor unduly influences the rating of other compensable factors for the same position. This can be observed as a higher than warranted rating of a particular factor (halo effect) or a lower rating of a particular factor (horn effect).

For example, a technician position requires an advanced degree in chemical engineering but does not require a high level of communication skills. But raters assume communication skills are important simply because a degree is required.

Restriction-of-range errors:

This occurs when raters fail to use the entire range of levels on a compensable factor. Raters may have a tendency to rate compensable factors all the same, for example giving problem-solving ability nearly the same rating for widely different positions.

A sign to watch for is when the ratings for a specific factor for 100 diverse positions in an organization all fall within two levels on a five-level scale.
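That warning sign is easy to check for mechanically. The sketch below, using hypothetical position names and ratings on a five-level scale, flags a factor whose ratings cluster within two adjacent levels:

```python
# Sketch of a restriction-of-range check on one compensable factor.
# Position names and ratings are hypothetical; levels run from 1 to 5.

ratings = {"technician": 3, "programmer": 3, "custodian": 2,
           "manager": 3, "clerk": 2}

levels_used = set(ratings.values())
spread = max(levels_used) - min(levels_used)

# Diverse positions squeezed into two adjacent levels on a five-level
# scale is a warning sign of restriction of range.
if len(levels_used) <= 2 and spread <= 1:
    print("possible restriction-of-range error on this factor")
```

Running the same check for every compensable factor as ratings accumulate gives a committee an early, objective signal to revisit its use of the scale.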

Similarity effect:

This occurs when a rater more favourably rates jobs that are similar to her own.

For example, a rater who holds the position of programmer/analyst may rate IT positions higher than administrative positions, even though there is no defensible reason to do so.

Sequencing error:

A sequencing error can occur when factors that were recently evaluated unduly influence the rating of subsequent factors.

For example, a custodial position is rated low on educational requirements. The position is rated low on the next factor simply because the previous rating was low.

Stereotyping:

Stereotyping occurs when a rater generalizes the attributes of a type of job.

For example, a rater has a preconceived idea of the job requirements of a clerical position and rates the position accordingly even though the position involves duties not traditionally associated with clerical work.

Higher value-higher rating:

This occurs when the existing job hierarchy unduly influences the rating of jobs. This error would tend to create a new job hierarchy that mirrors the existing one.

Getting stuck:

This happens when a rater does not acknowledge or correctly perceive changes in a job’s duties or required proficiency in a factor over time.

Favouritism:

This occurs when a rater evaluates positions held by friends and acquaintances more highly than others.

Contrast error:

This occurs when raters only compare positions to one another rather than to the stated definitions of the compensable factor levels.

For example, a rating committee rates position X higher than position Y on the problem-solving factor simply because position X requires a different kind of problem-solving, even though the level of problem-solving required of the two positions is equivalent.

Frame-of-reference error:

This occurs when a rater compares a position to her own personal standards and expectations for that job, rather than to the actual standards and expectations of the job.

For example, a rater believes that positions in unit X require a high level of functioning and rates those positions accordingly even though the belief is based on an erroneous assumption about the working conditions in unit X.

First-impression error:

This occurs when raters are unduly influenced by the rating assigned on the first or first few compensable factors for a position. Raters can form an initial high or low assessment of the position's requirements on one factor, and then ignore or consciously distort subsequent information so as to support that initial assessment. This can occur even though factors are designed to be rated entirely independently of one another.

Incumbent-position error:

Many job evaluation rating errors occur partly because, with the increasing complexity of jobs, incumbents are defining their own jobs to a growing degree. The boundaries and duties of a position may expand, contract or change depending on the person holding the job; therefore, it is difficult to separate the incumbent and the position during the job evaluation rating process.

How to guard against errors

Whether or not raters are conscious of the errors they can and do make, there are several methods and strategies that counteract the potential inaccuracies of job evaluation.

•Raters should be told of common rating errors and given an opportunity to practise rating in the job evaluation process.

•Jobs should be rated in different orders on different compensable factors. This will prevent rating committees from developing patterns of rating that may be incorrect and based purely on the order jobs are rated.

•A rating committee should establish a common frame of reference before rating begins. This could involve developing written guidelines on what each level of each compensable factor means, and what criteria would have to be met for a position to be rated on a given level.

•Information gathered on a position should encompass activities over a one-year period so that the full range of duties is taken into account.

•The rationale recorded for a position's rating should be consistent with the rating actually assigned.

•Raters should try to identify their own biases and preconceptions and develop plans to overcome them.

•Multiple raters should be used to rate all jobs. A committee using consensus-based decision-making will make fewer errors than raters working alone.

•A committee made up of a cross section of managers and employees from different locations, functional areas and organizational levels will counteract many potential errors such as similarity effect and favouritism.

•Continually reviewing the rating results as the project progresses will ensure any errors, including systematic errors, are corrected early. Ratings may have to be revisited as new information is gathered and new ideas and guidelines are developed.

•When reviewing the results, the rating information should be presented in different ways as the project progresses. For example, examining rating results by position and by compensable factor provides two different views of the same information. Also, presenting information graphically can help raters more easily interpret large amounts of information. Allowing raters to see data in different ways can prevent errors such as stereotyping and first impression.

•Employees should be given the opportunity to provide additional information about their positions. They should also be allowed to request that the rating of positions be re-examined.
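The suggestion above to rate jobs in different orders on different factors can be sketched simply. The job and factor names below are illustrative placeholders; the point is that each factor gets its own independently shuffled sequence:

```python
# Sketch of varying the order in which jobs are rated on each
# compensable factor, so no fixed sequence can shape the ratings.
# Job and factor names are illustrative examples.
import random

jobs = ["technician", "programmer", "custodian", "manager", "clerk"]
factors = ["education", "problem_solving", "working_conditions"]

rating_order = {}
for factor in factors:
    order = jobs[:]          # copy so each factor gets its own shuffle
    random.shuffle(order)
    rating_order[factor] = order

for factor, order in rating_order.items():
    print(factor, "->", order)
```

Generating the schedule before the committee meets removes the temptation to fall back on a familiar sequence, and helps guard against sequencing and first-impression errors.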

For a job evaluation system to be fair and consistent, and for the results to be accepted by its users, rating committees and raters must avoid errors. The elimination of systematic biases that exist in many organizations, such as those based on gender, location and type of work, is one of the purposes of job evaluation processes; therefore, it is important that HR managers be able to identify potential rating errors.

A job evaluation system that is designed to eliminate existing systematic errors in the relative worth of positions, yet is itself susceptible to systematic errors, is a major problem.

Christopher Banks is a partner in Rochon Associated Human Resource Management Consulting Inc. in Saskatoon. He may be reached at cbanks@innovationplace.com.
