Evidence-Based HR: What It Is, and Why We Should Care

I’ve been wanting to write about evidence-based HR for a while, in part because back in 2013 I wrote a blog post critiquing the idea (brattily titled: “Evidence-Based HR: Are We Kidding Ourselves?”) and have since completely changed my opinion. This time, I’ve left it to the experts and invited the wonderful Natasha Ouslis to set me straight on what evidence-based practice in HR is and isn’t, and why we should care.

Natasha is a scientist and a practitioner of behavioral science, specifically organizational psychology. She comes from a background in cognitive psychology (Honours BSc from University of Toronto), where she researched how people visually perceive the world around them and how they make decisions about that information.

She works as a researcher with the TeamWork Lab conducting team training and collecting data on how engineering project teams operate, and as a consultant with the team at BEworks to solve behavior change challenges at all stages of the employee and customer lifecycle.

Natasha, let’s start with the basics. What is evidence-based practice as it relates to HR and organizations?

Evidence-based practice is a really broad term. One definition is, “using high-quality evidence to make decisions in organizations in the face of organizational challenges.” As Rob Briner says, evidence can come in many forms, including personal experience, company data, and scientific studies. It’s really important to consider all sources of evidence; however, that doesn’t mean we should weigh all sources equally, because some types of evidence are less reliable than others. For example, we know that our memories of our personal experience don’t represent reality very well; if something comes to mind more easily, we remember it as more common than something that is harder to recall (Tversky & Kahneman, 1973).

How does this relate to HR and organizations? Consider an internal promotion. If a manager is considering someone for promotion, and a really memorable mistake they made comes to mind, the manager might think that person makes mistakes all the time. It might be harder to remember all the times they didn’t screw up. So, our personal experience with that individual is one source of evidence, but not the best source.

Next, we have company data. Continuing the promotion example, company data can be better than one person’s experiences, because we might have attendance metrics, multiple sources of feedback, sales data, or other records. With this data, and people who know how to analyze it, we can augment our personal experience with some harder metrics and question the assumptions our (flawed) perceptions have led us to. From this company data, we might learn that people rated favourably by their peers are more likely to succeed after being promoted. That’s good to know, and we can try to use that in our company.
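To make that concrete, here is a minimal sketch in Python of the kind of look-back a people analytics team might run. It assumes a hypothetical export of past promotions with columns named peer_rating and succeeded_after_promotion; the file name, column names, and the 4-out-of-5 cutoff are all illustrative assumptions, not anything from the interview.

```python
# A minimal, hypothetical sketch of the company-data check described above.
# The file name, column names, and rating cutoff are assumptions for illustration.
import pandas as pd

promotions = pd.read_csv("promotions.csv")  # hypothetical export of past promotions

# Flag people who were rated favourably by peers (4+ on an assumed 5-point scale)
promotions["rated_favourably"] = promotions["peer_rating"] >= 4

# Compare post-promotion success rates for the two groups
success_rates = promotions.groupby("rated_favourably")["succeeded_after_promotion"].mean()
print(success_rates)
```

If the favourably rated group succeeds more often, that is exactly the kind of pattern the interview describes, though it still only tells you about the past.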

Company data, though, tells us what has already happened; because it looks backwards, it’s not very good at telling us what to do moving forward. This is where behavioral science comes in. By looking to the research, we can build on and test out solutions that other people have painstakingly evaluated already. And that process of rigorous, controlled evaluation is documented so anyone can reproduce the steps.

Okay, and why should HR practitioners be interested in the application of behavioural science to our work?

You might have noticed two names in parentheses in my answer above. Those two people, Amos Tversky and Daniel Kahneman, are two scientists that HR practitioners should pay attention to. They were applying behavioral science to the workplace a long time ago! Back in the 1950s, Daniel Kahneman was building algorithms to improve the staffing of recruits in the Israeli Army: his approach inspired the Moneyball hiring system we are building where I work!

Behavioral science gives us two types of wisdom: the insights (i.e., what worked and what didn’t) and the process (i.e., that we can do our own testing to see what works and what doesn’t). With insights, we take what we learn and put it right into practice, which is good and bad! It’s good because we can get going right away, but it’s bad because we introduce a lot of risk by implementing something without checking first. When we take the process of scientific thinking into our company, we get some benefits that the insights alone can’t give us. Using the scientific process means we take a short time to pilot test our change against the status quo before rolling it out company-wide, and we can look for backfiring effects and take the time to calculate the costs and benefits. This reduces our risk because we learn what really works in advance. Of course, you can always take both!

I like the idea of learning what works in advance. Can you provide an example of how an HR team might use an evidence-based approach to solve a common HR challenge?

Coming back to the promotion example, we found in our historical company data that people rated favourably by their peers before being promoted did better after being promoted. This is a good insight and has some of the elements of a good process – we looked at ratings before the promotion and performance after, which means we know something about what’s happening over time. If we looked at ratings and promotion performance at the same time, we wouldn’t be as confident about the direction of the influence; maybe people who are promoted are liked better, and the influence flows the opposite way. So the process is good, but it could be better. We’ll talk about how a behavioral science approach (our third layer of evidence-based practice) can help.

Now, we take this insight about peer ratings and go to the scientific literature. There we might learn that peer ratings are not only affected by the employee who is being rated (and, in our case, considered for promotion); they are also affected by the purpose of the ratings (as in, whether they are used for administrative decisions versus learning and development). This throws a bit of a wrench in our insights from the company data; were those ratings accurate? If the peers knew their answers affected people’s livelihoods, were they inflating their ratings? Maybe the ratings are part of the puzzle, but they aren’t the only explanation. Then we could look to the research to gather more insights: why are people doing this? What are the best predictors of success after a promotion? After gathering these insights, we can follow the process of science in our own company to see if the purpose of the ratings affects our raters, or if the best predictors of success from the research show up in our results. Depending on the strength of the evidence, we may want to jump right in and implement, but there’s always a bigger risk there than if we tested first.

What are some common fears or misunderstandings that HR practitioners have that might prevent them from taking a more evidence-based approach?

This is such a great question! I’m working on an entire post about it, so here’s a sneak preview. A lot of it comes down to the fear of the unknown. We are much more comfortable with what we know, and with what we have been doing so far. I feel this as much as anyone; the domain I am most comfortable in is science, rather than sales. For others, it might be administrative tasks, or taking action right away, or benchmarking against other companies. I understand that, and that’s why putting evidence into a format everyone can understand, using their own language, can be so powerful.

Other than a fear of the unknown, there are misconceptions about science that stop us from seeing the connection between science and business. Many people I speak to think science needs a lab, complex machinery, and quirky, lone wolf types who no one can understand. But we can bust that myth right now: science doesn’t need a lab space or any expensive equipment. You can send one email to half your employees and another email to the other half – there you go, you just did science! Then you can tally up the responses you get and compare them – now you’ve done science and statistical analysis! Of course, this is a very basic example, but it doesn’t need to be much more complicated than this. There’s a whole Fast Company article about a similar experiment. As for the quirky lone wolf misconception, all of us scientists work together, and at least some of us are well-adjusted humans 🙂
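To show just how little machinery that experiment takes, here is a minimal sketch in Python of the two-email idea; the email addresses, the response set, and the 50/50 split are hypothetical placeholders, not real data or anything from the interview.

```python
# A minimal, hypothetical sketch of the "two emails" experiment described above.
# The addresses and responses are placeholders, not real data.
import random

employees = ["a@co.com", "b@co.com", "c@co.com", "d@co.com"]  # your real list goes here
random.shuffle(employees)
half = len(employees) // 2
group_a, group_b = employees[:half], employees[half:]  # send version A to one half, B to the other

# After the campaign, tally who responded to each version and compare the rates
responded = {"a@co.com", "d@co.com"}  # hypothetical set of people who replied
rate_a = sum(e in responded for e in group_a) / len(group_a)
rate_b = sum(e in responded for e in group_b) / len(group_b)
print(f"Version A response rate: {rate_a:.0%}, Version B: {rate_b:.0%}")
```

That random split plus a simple comparison is the whole experiment; everything else is just being careful about what you measure.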

The barriers to evidence-based management don’t only exist in HR practitioners’ heads; there are practical barriers to using behavioral science in business. Paywalls, academic jargon, 40+ page papers, expensive conferences, and esoteric statistics would make anyone turn and run. Fortunately, things are changing, though change is slow. The good news is that practitioners can co-create the future of science in the workplace, because scientists and practitioners need one another to tackle wicked problems (like the one below!).

Sexual harassment is top of mind for many of us in HR at the moment. How might an evidence-based approach help us to consider our organization’s practices related to sexual harassment?

Excellent example. I wish there were more research on this issue already. As I’m sure practitioners know, and anyone who has been following you would know, Jane, we aren’t where we should be in terms of tackling sexual harassment. Here are some steps, based on an evidence-based approach, that any practitioner can immediately use to diagnose their organization’s current state on sexual harassment:

1 – Define your Terms: What is sexual harassment? It seems simple, but we need to know what we’re looking for before we can find it. If the definition is already in your policies, great! Is it a complete definition? Does it consider the power dynamics between individuals? Is it sensitive to harassment that could be perpetrated by, and experienced by, any gender? After you’ve done this, you need an operational definition: How will you measure it? Does your operational definition capture everything from your original definition?

2 – Establish a Baseline: Look at your existing data related to this issue, if you have any. Can you organize the data into business functions? Seniority levels? Are there any clusters in the data? The only data skills you need right now are descriptive statistics: calculating averages, making simple plots, and looking at subgroups. If you don’t have data, use this time to identify the questions you need to answer, and what organizational data can get you there.

3 – Look for Barriers: Audit your current practices. Do you have a training program? If so, what does it entail? Do you have a disciplinary process? What steps does it have? Is it being implemented consistently? Go through this with each of your HRM practices, as sexual harassment can impact your selection, promotion, retention, and other stages.

4 – Review the Science: Look to the scientific literature. Finding published work on sexual harassment is as easy as going to scholar.google.com, typing “sexual harassment workplace”, and clicking on the results. One paper I found by following those three steps explains some personal and organizational predictors of harassment, as reported by people who perpetrated those behaviors. There are a few challenges with this research, specifically that the person who did these acts is responding about them and that the paper doesn’t show us the averages for any of the metrics they collected. But it’s a starting point; it’s helpful to know whether workplace sanctions or other characteristics make a difference.

5 – Test and Iterate: Rigorously evaluate your own practices, if you have them, and test new interventions if possible. For example, if you have a sexual harassment training program, you can compare people randomly assigned to the training with those who have not yet done it: ask both groups about their attitudes towards harassing behaviors at work (for instance, whether it’s wrong for a person in power to pursue an intimate relationship with someone without formal power) and measure their behaviour over time. Based on this comparison, you can change the training (bystander training has shown some success) or try a different approach. Step five is where you would conduct a scientific experiment, but steps 1-4 are still important to improve the chances of running a good experiment in step 5.
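As a rough illustration of what that step-5 comparison could look like once survey responses are in hand, here is a minimal sketch in Python. The scores, group sizes, and 5-point scale are hypothetical assumptions, and a real evaluation would need far more care with sampling, measurement, and follow-up than this suggests.

```python
# A minimal, hypothetical sketch of comparing a trained group with an untrained group.
# Higher scores are assumed to mean stronger rejection of harassing behaviour.
# The numbers below are placeholders for real survey results.
from scipy import stats

trained = [4.5, 4.8, 4.2, 4.9, 4.6, 4.4]      # people randomly assigned to the training
not_trained = [4.1, 3.9, 4.3, 4.0, 4.2, 3.8]  # comparison group, not yet trained

mean_diff = sum(trained) / len(trained) - sum(not_trained) / len(not_trained)
t_stat, p_value = stats.ttest_ind(trained, not_trained)
print(f"Mean difference: {mean_diff:.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

The point is not the particular test; it is that random assignment plus a pre-planned comparison lets you see whether the training is doing anything before you roll it out everywhere.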

Much of this process is common sense, but there are innovative solutions you can draw from in the behavioral science research. These tactics, called nudges because they leave you free to choose while the environment is designed with a behavioral focus, have helped solve many workplace challenges in areas like retirement savings, compliance, and debiasing in recruitment. These are scientific insights of the kind I referred to earlier, but the process of science is also quite innovative: it forms the backbone of the agile process, and by constantly retooling what doesn’t work, HR can use this system to adapt to changing demands.

You just started a blog at natashaouslis.com; congratulations! I want to share a link to a fantastic slide deck you built and featured in your inaugural post that summarizes a century of research on effective hiring practices. Can you give readers a hint about what it says?

Thank you very much! We all want to find applicants who will succeed in our organization, but the practices we’re using get in our way. For example, years of education and reference checks are very weak at predicting an employee’s future performance. I walk viewers through the business case for faster, cheaper, and more powerful hiring practices and bust some myths about emotional intelligence and group differences along the way!

Thanks to Natasha, who took the time to share her clear and compelling thoughts despite a busy schedule. This was so informative.

Read This Week

Anger and Inequality – James Elfer

A topic frequently overlooked (or purposely avoided) is anger in the workplace. James’ nuanced and thoughtful post starts with the assertion: “We should be more open to anger in the workplace”. You should read this.

“Anger has value. It is also a likely emotional response to discrimination. But those facing this injustice on a systematic scale are also the people at greatest risk of stereotyping. So if they speak up in anger – an emotion we are all likely to experience when confronted with unfairness – they face social penalty. At work, this includes threats to income, career opportunity and status as outlined above.

Recently, we’ve witnessed the power of this type of dynamic – it has hidden abuse on a systematic scale. But we have also seen its weakness – it can be beaten when anger meets community, transparency and social strength… it’s #MeToo; it’s the Oxfam scandal brought to light; it’s Enora at the Women’s March:

“It feels good to be here, like something inside us was allowed to rest, and grow fiercer at the same time.””

How to Rands – Michael Lopp, Medium

I’m a big fan of Michael Lopp, otherwise known as Rands, a prolific writer on management and currently VP of Engineering at Slack. This is his ‘user guide’ to himself as a manager and it’s awesome in its openness and self-awareness.

“One of the working relationships we need to define is ours. The following is a user guide for me and how I work. It captures what you can expect out of the average week, how I like to work, my north star principles, and some of my, uh, nuance. My intent is to accelerate our working relationship with this document.

Disagreement is feedback and the sooner we learn how to efficiently disagree with each other, the sooner we’ll trust and respect each other more. Ideas don’t get better with agreement.

Ask assertive versus tell assertive. When you need to ask me to do something, ask me. I respond incredibly well to ask assertiveness (“Rands, can you help with X?”). I respond poorly to being told what to do (“Rands, do X.”) I have been this way since I was a kid and I probably need therapy.”

How to Lose Your Job from Sexual Harassment in 33 Easy Steps – Deborah Copaken, The Atlantic

This is why having a policy and checking all the compliance boxes isn’t enough.

“Editor’s note: When reached for comment, Kurson told The Atlantic, “Deborah is a terrific writer. I published her often and wish her nothing but the best.” James Karklins, the president of Observer Media, provided The Atlantic with the following statement: “Observer Media strives hard to provide a safe and comfortable environment for our employees and has a process by which all formal complaints are taken seriously. Our policy is to take corrective action when evidence of misconduct is presented. During Ken Kurson’s employment there were no formal complaints received. Deborah Copaken was a freelance writer and never an employee of the company. As with all freelance writers each story is pitched and accepted on a case-by-case basis.” Karklins confirmed to The Atlantic that there is no process for writers who aren’t on staff to file complaints.”

I’m All Over the Interwebs This Week!

  • On the Actionable.co blog writing about a less-obvious remote work challenge: employee anxiety
  • Answering a good question over at CollageHR: “I accidentally yelled at a new employee. How can I make amends?”
  • Being interviewed by Bill Banham on the HRChat podcast about my upcoming InnovateWorkTO talk, #MeToo and other stuff

Photo by João Silas on Unsplash

 
