Succession management - Selection and assessment: examples

The case study institutions learned a number of lessons when they began selecting and assessing people for their succession schemes. Here are some examples.

"At our first-ever development centre we had lots of criticism of the psychometrics, people telling us it wasn't valid. We realised afterwards that although we had sent them paperwork and explained everything, we needed to meet with them face to face to explain what would happen and reassure them."

"We’ve had situations where the wrong person is put forward for a development centre: someone who doesn’t get the process and isn’t self-aware enough to deal with the feedback. Occasionally they work out for themselves that they aren't cut out for leadership, but in one case the person got feedback, from all the sources we use, that should have made it clear to them that they weren’t a leader – but they still insisted on going on the leadership programme. Eventually their senior managers were brave enough to tell them. We learned from this that it’s helpful if the person is recommended by both their line manager and someone in HR."

"In our first round of assessment I allowed a lot of free-text responses in the questionnaires, and we got too much text in the responses, which we then couldn’t analyse. Now I limit responses to drop-down boxes and 50-word paragraphs wherever I can."

"In our first talent programme I wanted to be inclusive and introduced self-nomination. But it’s really hard to compare the evidence from a self-nomination with the evidence from a manager nomination – there’s much less information and it just isn’t comparing like with like. So we introduced the requirement to have a supporting statement from the nominee alongside the manager’s supporting statement, so that both voices can be heard in all cases."

"It’s possible to test too much. Everything you use needs to have good face validity, and be not too far away from the higher education context, so it’s better not to test for something than to use a test that doesn’t feel valid to participants. It’s also important not to have too many too many tests for the same thing. For example we found that we were over-testing for critical thinking, which was generally present in our academic cohort anyway. We have now written, trialled and launched our own case studies with support from external psychologists."

"Our scheme assumes everyone in the target grade will be assessed. This is a bit of a shock to some people: I’ve had people saying, ‘But I don't want to be a dean.’ My answer is, 'It's an organisational requirement that you’re assessed because we want the data. When you have your results I'll ask you again whether you still don’t want to be a dean, and if you don’t that’s absolutely fine.' At that point some people confirm their original view, others change their view. It’s an interesting way of addressing tendencies in some under-represented groups to rule themselves out before they start."

"The first time we ran the training and briefing of assessors, there was too much of a gap between the training and the real thing. They’d forgotten a lot of it by the time they came to do it for real. The training needs to be close enough to the experience."