A litmus test for leaders

360-degree feedback reports can be read in different ways, but the action plan is critical

For any organization, increasing leadership skills is a priority. One of the ways to do this is to conduct a 360-degree feedback assessment, which serves as a sort of litmus test for managers.

There are several steps that can be taken to ensure the feedback process goes smoothly and actually has an impact. Before the feedback process, the organization should have:

• determined the objective of the assessment

• created the topic areas and questions that support this focus on development

• deployed the assessment online and collected feedback on each participant from the person being rated, her manager, her direct reports and several colleagues

• created an individual report for each participant.

Once the 360-degree feedback process is completed, it’s time to communicate the assessment results to the manager. If the report is handed over without a discussion, the numbers in the report could be overwhelming. A manager might also try to determine who said what and focus only on the weaknesses brought out in the report instead of looking at balanced feedback.

A supervisor’s job is to help a manager understand what the report implies about his skills. This discussion should take a balanced approach, covering both positive and negative feedback. Beyond that, the supervisor can help him identify the development opportunities that offer the best payoff when weighing effort against results.

So how do you approach this conversation? How do you work with each leader to understand the messages within the scores, as well as help determine priorities and recommend changes?

Although reports generated by different software vendors can look very different, they share common elements. Below are four typical reports, with a short explanation of what can be learned from each one.

Report 1: The summary view

Organized by topics (or competencies), this report provides a summary of scores, arranged by the different relationships. These scores are the averages of all the questions assigned to the specific topic area. Reviewing this report provides indicators of what to look for throughout the rest of the report.

It generally includes an overview of the self scores, as well as scores from each rater relationship. Compare the self scores to the scores from other relationships (such as the manager or colleagues). This can provide insight into perception issues between the raters and the participant: the larger the gap, the greater the perception issue.

Is the participant over- or underrating himself? Is there consistency between the different rater groups? Are there any outliers that need attention?

In going through this report, determine three or four areas to highlight and ask the participant for his feedback on what he wants to highlight as well.
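To make the arithmetic concrete, here is a minimal sketch of how a summary view like this can be computed: topic scores are the averages of the question ratings assigned to each topic, broken out by rater relationship. The data, names and one-to-five scale below are hypothetical, not taken from any particular vendor’s software.

```python
# Minimal sketch of the summary view: average ratings per topic, per rater
# relationship, for one participant. Data and field names are hypothetical.
from collections import defaultdict
from statistics import mean

# Each response: (participant, rater relationship, topic, question, 1-5 rating)
responses = [
    ("pat", "self",          "communication", "q1", 4),
    ("pat", "manager",       "communication", "q1", 3),
    ("pat", "direct_report", "communication", "q1", 2),
    ("pat", "colleague",     "communication", "q1", 3),
    ("pat", "self",          "communication", "q2", 5),
    ("pat", "manager",       "communication", "q2", 4),
]

def summary_view(rows, participant):
    """Average ratings per (topic, rater relationship) for one participant."""
    buckets = defaultdict(list)
    for who, relationship, topic, _question, rating in rows:
        if who == participant:
            buckets[(topic, relationship)].append(rating)
    return {key: round(mean(vals), 2) for key, vals in buckets.items()}

scores = summary_view(responses, "pat")
# Compare the self score to each other relationship to spot perception gaps.
for (topic, relationship), avg in sorted(scores.items()):
    print(topic, relationship, avg)
```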

Report 2: Question scores by competencies

This report breaks down each competency or topic by its questions. Review this report much like report one, but also:

• approach this report with more emphasis on specific areas of each competency

• look for the highest- and lowest-rated items overall, the highest and lowest by rater category, and consistency within each category (see the sketch after this list)

• identify specific questions that may have raised or lowered scores within a given competency.
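Here is a small sketch of the question-level scan described in the list above: rank items by average rating, overall and within a single rater category. The questions, ratings and field names are hypothetical examples, not output from any particular vendor’s report.

```python
# Rank survey questions by average rating, overall and by rater category.
# Data and names are hypothetical.
from collections import defaultdict
from statistics import mean

# Each response: (participant, rater relationship, topic, question, 1-5 rating)
responses = [
    ("pat", "manager",   "communication", "Listens to others", 3),
    ("pat", "colleague", "communication", "Listens to others", 4),
    ("pat", "manager",   "teamwork",      "Shares credit",     2),
    ("pat", "colleague", "teamwork",      "Shares credit",     4),
]

def question_averages(rows, participant, relationship=None):
    """Average rating per question, optionally limited to one rater category."""
    buckets = defaultdict(list)
    for who, rel, _topic, question, rating in rows:
        if who == participant and (relationship is None or rel == relationship):
            buckets[question].append(rating)
    return {question: mean(vals) for question, vals in buckets.items()}

overall = sorted(question_averages(responses, "pat").items(),
                 key=lambda item: item[1], reverse=True)
print("highest rated:", overall[0], "lowest rated:", overall[-1])
print("manager only:", question_averages(responses, "pat", "manager"))
```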

Report 3: The gap report

This report generally shows each question text with the associated topic area, the self-score, the roll-up of all the raters’ scores and the difference between the self and the others’ scores (the gap). The larger the gap, the more inconsistent the view of a behaviour between the manager and the other raters.

In looking at the gap column, a positive gap reveals a participant has undervalued himself while a negative gap reveals areas where he has overrated himself. It’s important to look for gaps of more than one (either positive or negative) and discuss why there might be a perception difference.
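Here is a minimal sketch of that gap calculation, assuming the gap is the all-raters average minus the self score (so a positive gap means the participant rated himself lower than others did). The questions and scores are illustrative only; the one-point threshold follows the guideline above.

```python
# Gap report sketch: others' average minus self score, flagging gaps over one
# point for discussion. Questions and scores are hypothetical.
from statistics import mean

def gap_report(self_scores, rater_scores, threshold=1.0):
    """Return (question, self score, others' average, gap, flag) rows."""
    rows = []
    for question, self_score in self_scores.items():
        others = mean(rater_scores[question])
        gap = round(others - self_score, 2)
        rows.append((question, self_score, round(others, 2), gap,
                     abs(gap) > threshold))
    return rows

self_scores = {"Delegates effectively": 4, "Listens to others": 2}
rater_scores = {"Delegates effectively": [3, 2, 3], "Listens to others": [4, 4, 5]}

for question, own, others, gap, discuss in gap_report(self_scores, rater_scores):
    print(question, own, others, gap, "discuss" if discuss else "")
```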

Report 4: Comments

Some debriefers prefer reviewing comments first so they have context for interpreting the rest of the scores. Others wait until all the quantitative data has been reviewed. At any point in the debriefing process, if the participant looks at a score and says, “I have no idea what people are talking about, this makes no sense to me,” look at the comments to see if there is any qualitative support for the quantitative scores.

Balanced feedback

When reviewing a leader’s report, look for balanced feedback. Balance is imperative — it is just as important to identify the things a person does well as it is to point out areas for improvement. Also, a lower score is not necessarily a bad score. If someone has a lower score for the question “Makes clear and convincing presentations?” but he is not asked to make presentations as part of his job, this area does not necessarily need to be worked on.

It is important to look for trends and themes, both within a topic area and across topic areas. For example, the topic areas can often be grouped into two types of feedback: what is accomplished (such as results-orientation, problem-solving or decision-making) and how things are accomplished (such as communication, teamwork or developing people).

Sometimes participants receive high scores in one area but not both. For example, they get high scores in what they accomplish but lower scores in how they get things done (such as their style). Other participants can receive high scores for their style but lower scores in what they achieve.

Also look at the scores in relative terms. Each person has higher and lower scores on individual questions; what is a high score for one person may be a low score for another. Check in with the participant on whether he agrees his top scores represent areas he does well and his lower scores represent areas for improvement.

Next steps

If the assessment process is stopped after the participant sees his feedback, the organization has missed a strategic part of the process — how to help managers plan to make changes where necessary, and support their areas of strength.

Following the debrief meeting, a participant typically thinks through the meaning of the feedback, prioritizes his thoughts and assembles an action plan with three to five areas to focus on over the coming months. Some leaders wish to take a strength and make it stronger, while others want to improve one or two problem areas. There is no right answer; the action plan may combine both strategies. Ongoing discussions (on a monthly or quarterly basis) keep the action plan relevant and on track.

The next time the participant is rated, everyone wants to see measurable change that can be attributed to the assessment process. Once the action plan is in place, follow-up assessments can measure the change in participant and rater perceptions. This is the continuous improvement process at work.

Making 360-degree assessments an ongoing process can help a manager stay focused on developing her staff, measure the gains in her own leadership skills and improve overall organizational performance.

Marcie Levine is CEO of SurveyConnect, a Boulder, Colo.-based provider of intuitive assessment applications, survey software and survey services. She can be reached at [email protected] or (303) 449-2969 ext. 223.
