The biggest question facing Learning Functions right now? Analytics and Improving Human Development

And it's all because the lady loves Milk Tray...


Analytics for Learning and Development – Hard and Soft Data

Accountability, and the need to market your L&D function to aid its survival and recognition, drive us to reach for the measuring stick. Value? Everyone wants to prove that what they do works.

Many argue against the very principle of it, but I can’t see the issue here. If you are going to invest (develop people and spend, in some cases, a shed load of money) then you need to check: a) that you are getting value for money (whatever that means to you), and b) that it has had the impact you intended, even if it didn’t cost much to implement.

The first question is: what’s the point of gathering data? Everyone is beating the drum, but a major stumbling block is that we are often stuck with outdated systems and processes, or the data simply doesn’t exist.

Key Purpose
The data collection, analysis and reporting process should have a pre-determined outcome (or outcomes). That usually comes in the context of answering sometimes deep and pertinent questions such as:

  1. Is there a connection between when and where course content is accessed and the level of employee engagement or performance?
  2. What courses or certifications are correlated with improved performance in a specific division?
  3. Which programs resulted in the greatest measured improvements in productivity, by employee role?
  4. With the number of new employees, what does the compliance gap look like across all the regulatory/compulsory courses?
  5. What other business or HR measures change as a result of the learning interventions?


Often this can be tied to ROI validation, but it’s key to consider both what’s easy to get and what’s useful, i.e. what can answer some key questions. Identifying those questions usually requires you to engage your key stakeholders. It adds to your role as consultant and business partner.

Getting the Basics Right
Simple data to collect might include these items, which form the mainstay of learning-function accountability:

  • How many people out of the total workforce did not complete a course or take a development option?
  • What was the increase in course completions year over year?
  • How do the registration rates compare to the completion rates?
  • How many people started and completed the surveys on courses and programs?
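As an illustration, these basic rates fall straight out of a simple enrolment export. Here is a minimal sketch in Python, assuming a hypothetical record layout (the field names, courses and figures are invented, not tied to any particular LMS):

```python
# Hypothetical LMS export: one record per course per year.
# All names and numbers below are illustrative only.
records = [
    {"course": "GDPR Basics", "year": 2022, "registered": 120, "completed": 96},
    {"course": "GDPR Basics", "year": 2023, "registered": 150, "completed": 135},
    {"course": "First Aid",   "year": 2023, "registered": 80,  "completed": 40},
]

def completion_rate(rec):
    """Completions as a fraction of registrations: the basic accountability metric."""
    return rec["completed"] / rec["registered"]

# Registration vs completion rate, per course per year
for rec in records:
    print(rec["course"], rec["year"], f"{completion_rate(rec):.0%}")

# Year-over-year change in completions for one course
by_year = {r["year"]: r["completed"] for r in records if r["course"] == "GDPR Basics"}
yoy_increase = (by_year[2023] - by_year[2022]) / by_year[2022]
print(f"YoY completion increase: {yoy_increase:.0%}")
```

The point is less the code than the habit: once the export exists, each bullet above reduces to a one-line calculation that can be reported consistently year after year.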

Further data could include:

  1. Demographics
  2. Feedback
  3. Psychometric/assessment centre results
  4. Test results
  5. Skill level assessments
  6. Performance reviews
  7. Cross-project work and any other secondment or development time spent somewhere else learning new skills (emphasis on new skills)

Where an e-learning system or modules are in place:

  • Time on system
  • Clicks and scrolling

The use of Soft Data
There are many options to choose from, but the simplest come from direct feedback stating that what happened made a difference to performance and therefore contributed to a business metric. There is absolutely nothing wrong in collecting anecdotal (recorded) evidence from individuals or managers (I call this soft data) to supplement hard data sets.

The most famous Evaluation Model
If you go back to basics and use the Kirkpatrick evaluation model, with its four stages, most people fail straight away at stage 2: no formal assessments or testing of skills in any robust way.

Integrated project work is a halfway house, which can show a practical business benefit, but often this is no longer built into shorter training interventions (which have got shorter and shorter) for fear of interfering too heavily in daily work routines and work completion – isn’t improving these the point? Little hope remains of a proper ROI evaluation, or any other type, when too much is relied upon in the ‘happy sheets’ of the ‘on the day’ learning experience.

Our infographic here gives you a strategic look at the world of L&D and how analytics fits into the picture of both data and business results.

Human Performance Improvement

My starting point here is the simplest, but I also prioritise under the 80/10/10 principle: 80% you can’t help because of budget constraints (unless you’re Google, of course), but work on the top 10% and the bottom 10%, because that’s where you can create the biggest impact on the organisation if you are careful about what you invest in.
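As a sketch of that targeting step, here is one way the top and bottom 10% could be flagged from a single performance score per employee (the scores and employee IDs are invented for illustration, and a real exercise would use agreed, multi-source measures rather than one number):

```python
# A minimal sketch of the 80/10/10 targeting idea: rank a hypothetical
# performance score per employee, then flag the top and bottom deciles
# as the groups where a constrained L&D budget is focused.
def decile_targets(scores):
    """Return (bottom_10pct, top_10pct) lists of employee IDs by score."""
    ranked = sorted(scores, key=scores.get)   # IDs, lowest score first
    n = max(1, len(ranked) // 10)             # decile size (at least one person)
    return ranked[:n], ranked[-n:]

# Invented scores for twenty employees
scores = {f"emp{i}": s for i, s in enumerate(
    [55, 62, 71, 48, 90, 83, 67, 95, 40, 76,
     58, 88, 52, 99, 61, 70, 45, 80, 66, 73])}

bottom, top = decile_targets(scores)
print("focus investment on:", bottom, "and", top)
```

The design choice is deliberate: the middle 80% are not ignored, they simply aren’t where a constrained budget buys its biggest organisational impact.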


Important Considerations:

  • Understand what your goal is (point b), agreed with stakeholders
  • Understand your starting point (point a), agreed with individuals/managers
  • Map out a realistic, budget-suitable journey; understand the impacts of change, organisational goals and divisional cultural differences (you could use a force field analysis to explore all the issues or potential blockers)

For me HPI for individuals operates in 4 areas which come in order of impact and significance:

  1. Intelligent and Flexible Thinking (including self-confidence, attitude, self-awareness and comprehensive thought processes)
  2. Mature Skills (practical ability to do certain things very well)
  3. Deep Knowledge (procedural, technical and environmental)
  4. Physical and Mental Well-Being (that’s resilience and stability)


Top-down performance improvement can often stem from role-model excellence, i.e. management and leadership essentials actually taking place! This is part of the internal/external considerations of HPI in terms of the work environment, the organisation’s relationship with the individual and so on (the engagement dynamics).

Improvement initiatives should balance personal career development plans (short term and long term) as well as being aligned to the organisation’s succession plans. It’s an important strategic link.

Defining Performance Levels
When deciding what different levels of performance look like, you are aiming for a clear differentiation between outstanding, typical and unacceptable.

A common definition and understanding of what that means must be established for each role or team at different levels. Benchmarking can be useful, but it also has its dangers: excessive pressure, unfair and unworkable performance-judgement ‘tests’, over-engineering of processes, and competitiveness. These measures must be carefully tailored to suit cultural and organisational norms and to drive the required behaviours leading to enhanced performance (noting the Dunning-Kruger effect of illusory superiority/inferiority).
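One simple way to make those levels concrete is to agree explicit score bands per role. A minimal sketch (the band edges here are invented for the example; in practice they would be negotiated with stakeholders per role and team):

```python
# Illustrative only: agreed score bands make "outstanding / typical /
# unacceptable" an explicit, shared definition rather than a gut feel.
# The edges below are invented; each band is [lower, upper).
BANDS = {
    "unacceptable": (0, 50),
    "typical":      (50, 85),
    "outstanding":  (85, 101),
}

def performance_level(score):
    """Map a score onto the agreed band, raising on out-of-range input."""
    for level, (low, high) in BANDS.items():
        if low <= score < high:
            return level
    raise ValueError(f"score out of range: {score}")
```

Publishing the bands is the real benefit: everyone judges against the same yardstick, which takes some of the sting out of the benchmarking dangers above.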


Do check out our masterclass on ROI and Business Partnering.