Challenges and Opportunities with Optimal HR Decision Making and How Advanced Analytics Can Help
- Sep 23, 2013
- 1.1. How We Make Decisions and What Gets in the Way
- 1.2. Rise of the Machines: Advanced Analytics and Decision Making
- 1.3. Human and Machine: The Ideal Decision-Making Team
1.1. How We Make Decisions and What Gets in the Way
In their book Nudge, economist Richard Thaler and legal scholar Cass Sunstein describe homo economicus and homo sapiens. Homo economicus are humans as they are described in economics textbooks: they act and make decisions completely rationally, have the computing power of a hundred supercomputers, and always know precisely what will make them happy. Homo sapiens, however, do things like jump out of perfectly good airplanes, forget significant others’ birthdays, and occasionally drink or eat too much. Thaler and Sunstein refer to homo economicus as econs and to the rest of us as humans.1
Remarkably, until relatively recently, even in light of nearly unlimited anecdotal and empirical evidence, we assumed our decision making was almost always rational and optimal. It was not until the ground-breaking work of those like Thaler, Daniel Kahneman, Amos Tversky, Robyn Dawes, Dan Ariely, and many others that this fundamental assumption of rationality was largely undone. Probably the fatal blow to the idea that we always decide rationally was delivered by Kahneman and Tversky.2 “Econs” have long been assumed to “maximize their utility”; this requires that they have a very clear idea of their preferences. Work by Tversky and Kahneman provides evidence of a “framing effect.”3 This finding shows that our preferences, and the decisions that follow from them, shift depending on how the same information is presented.
Relative to human capital management (HCM) decisions, this may mean that someone is rejected for an interview based on the letter font used on his curriculum vitae (CV) or résumé. It might not be a conscious decision; the reviewer may just equate a particular style with professionalism. Though most would agree presentation matters, deciding not to interview someone based on a single data point, especially when that data point is a preference for Times New Roman over Cambria, could be considered less than ideal. This matters because the sum total of all the small and large HCM decisions will make or break an organization. Who we hire and promote, how we compensate and motivate people, the type of training they receive—these decisions have a direct and identifiable impact on the success of the organization.4
Though there is an ongoing debate about just how rational we really are,5 there is agreement that we are often pushed toward acting irrationally,6 even when rational action would lead to the best outcomes. I conduct empirical research, and the research questions that interest me revolve around this question: What works at work? For example, does giving employees more decision-making authority lead to better firm performance? Does the executive compensation plan provide an incentive to actually improve performance?
One topic on which I have done a fair amount of research is the granting of stock options to nonexecutive employees.7 From the perspective of standard rational economic theory, this is really a foolish thing to do. Economic theory would say that granting stock options to anyone other than the top few employees is about as sensible as burning the options. The primary theoretical lens used to justify granting company shares to employees is called agency theory, and although it provides a very good rationale for the granting of stock options to executives, it provides a very poor one for granting to nonexecutives.8 Based on agency theory, there is no reason to expect giving stock options to nonexecutive employees will motivate them to work harder, smarter, or longer, because their individual efforts have very little impact on the share price. However, surprisingly (initially even to me), giving stock options to nonexecutive employees seems to do just that. We have repeatedly found evidence that giving stock options to a broad set of employees (in some cases, everyone in the firm) increases productivity and other performance outcomes.9 In this instance, then, employees are not acting as one would expect econs to act. Rather than working harder because they believe their individual effort can move the share price, they appear to be working harder for some completely different reason.
A detailed exploration of what is driving those behaviors is beyond our scope here, but it may be that broad-based stock options create a culture of engagement. Stock options may go some way toward establishing a workplace where there is an attitude that we are all in this together, and maybe this is what causes employees to work harder, smarter, longer, or more collaboratively.10 What this means is that when we are attempting to predict how people are actually going to respond, the rational model is not of much use. (Like it or not, our default assumption is often that people will respond rationally.) It also means that our predictive models need to incorporate new findings from behavioral economics, psychology, and neuroeconomics.
In an interview in the Sloan Management Review, Thomas Davenport, who, along with Jeanne Harris, has written extensively on analytics, said that he thought many great tools were being underutilized.11 In the article, Davenport went on to say that he was referring not only to structured and unstructured data but also to the insights on decision making that could be found in the “wisdom of crowds,” “behavioral economics,” and “neuroscience.” This section explores a number of the factors that impact the quality of our decision making.
1.1.1. Intuition Versus Analytical Thinking
The fact that we do not decide rationally is not to suggest that there is anything wrong with the way our brains work; after all, it is our minds that came up with things like language, the written word, and chocolate-covered peanuts (significant and important things). Daniel Kahneman’s notion of thinking fast and slow and Thaler and Sunstein’s System 1 and System 2 cover the important characteristics of how we think. Thinking fast is essentially making decisions based on intuition, and thinking slow, as the name implies, refers to making decisions based primarily on analytical evaluation. Kahneman also uses the terms System 1 and System 2 thinking. System 1 thinking is our intuition—those thoughts, feelings, impressions, associations, and preparations for action that all happen automatically and fast (for example, chatting with friends or brushing our teeth). System 2 thinking, reflective thinking, is by contrast slow and deliberate, thoughtful and effortful. This is the type of thinking we engage in when rule-based logic is required or when, for example, we are completing our taxes or learning a new skill. Examples of situations where we think fast include the following:12
- Detect that one object is more distant than another
- Detect hostility in a voice
- Understand simple sentences
At other times, our thinking needs to slow considerably, as in the following examples:13
- Teaching someone a new skill
- Filling out a survey
- Checking the validity of a complex logical argument
Basing decisions solely on intuition can be problematic. Making hiring, promotion, and bonus decisions based on gut instinct carries with it the potential for including a lot of bias and incomplete information. The fact is that most workforce management decisions are rife with potential biases, and making these decisions with the assistance of analytics can help eliminate many of these biases. This is not to say that there is no place for “expert” intuitive knowledge. The use of stock options is an example. Based purely on a rational model of decision making, no firm would ever issue stock options to anyone other than the two or three top employees who may have the power to move the share price.
Silicon Valley, the undisputed epicenter of worldwide technological innovation, was one of the first to recognize how broadly distributed stock options could help motivate and retain employees.14 In fact, some say that stock options provide the fuel that powers Silicon Valley.15 Frankly, Silicon Valley might never have existed (and so some of the world’s greatest innovations might not have happened) if those making HCM decisions had thought like econs and assumed everyone else did too.
What you want to keep in mind here is that although there is a critical role for intuition (that is, paying attention to your gut), it is almost always advisable to temper decisions with analytics. Generally speaking, many of the decisions associated with HCM have considerable potential for bias. Consequently, the ideal approach is one that combines the best analytics with well-seasoned human expertise.
1.1.2. Poor Intuitive Statisticians
Another critical realization is that we are really lousy statisticians. In the introduction to his book, Kahneman recounts the story of the first research project that he and Tversky undertook. They wanted to determine how good we are as intuitive statisticians. So, they developed and administered a survey at a meeting of the Society of Mathematical Psychology; participants included those who had authored statistical textbooks.16 Even those with years of training and expertise were not good at predicting the probability of an event. Those with substantial training in statistics were prone to accept research that was based on small sample sizes and also gave a hypothetical graduate student inaccurate advice regarding the number of observations she would have to collect. This matters because we are constantly assessing the probability of an event occurring (for example, the probability that an employee will perform as expected, the likelihood that a specific compensation approach will promote desirable outcomes). Fortunately, there is a fix, or at least a fairly robust solution. Data coupled with a good idea of the factors influencing an outcome, along with some pretty straightforward statistics, will go a long way toward predicting a likely outcome.
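The small-sample trap that tripped up even trained statisticians can be made concrete with a short simulation. The sketch below is illustrative only (it is not from the book, and the effect size, sample sizes, and crude z-style test are assumptions chosen for clarity): it shows how often a study of a given size "detects" a real underlying effect.

```python
import random

random.seed(42)

def detection_rate(n, trials=2000, effect=0.5):
    """Fraction of simulated studies that detect a true effect of
    `effect` standard deviations using a sample of size n."""
    hits = 0
    for _ in range(trials):
        sample_mean = sum(random.gauss(effect, 1.0) for _ in range(n)) / n
        standard_error = 1.0 / n ** 0.5
        # Crude test: call it a detection if the sample mean sits
        # more than 2 standard errors above zero.
        if sample_mean > 2 * standard_error:
            hits += 1
    return hits / trials

# Intuition says a real effect should show up either way; in fact
# the small study misses it most of the time.
print("n=10 :", detection_rate(10))
print("n=100:", detection_rate(100))
```

The intuitive answer, that a real effect will surface regardless of sample size, is exactly the mistake Kahneman and Tversky's survey respondents made; the simulation shows why a sample-size calculation should come before the study, not after.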
1.1.3. Understanding Human Nature
In a book about advanced analytics, it might strike you as odd that I will also be emphasizing the critical role that human intuition plays in decision making. I emphasize this because a number of constraints apply to advanced analytics when attempting to predict how people are actually going to act. Take, for example, stock options. Any model that expects rational behavior would expect no incentive effect associated with their use. (For example, individuals should not work longer, harder, or smarter.) However, that is not what we observe. People do actually work much harder. The more we understand how people think and act and what is important to them and what motivates them, the greater the likelihood that we can accurately predict behaviors. Much new evidence from the natural and social sciences helps us better understand human nature; the same holds true for the humanities. For instance, experimental philosophy is empirically testing many basic assumptions about how we experience and relate to the world.17 We delve into the implications of these new findings in subsequent chapters.
1.1.4. Biases and Decisions
One of the most critical factors influencing our decision making is our own biases. These are not something that we are generally even consciously aware of. However, they adversely impact our decision making. A number of biases are especially troublesome when making HCM decisions, including the following:18
- Confirmation bias: This bias causes us to ignore evidence that undermines a preconceived idea. For instance, we may be convinced that someone is the person for the job even after much evidence to the contrary.
- Anchoring: We have a tendency to fixate on a single data point that we consider especially telling and to weigh it too heavily when making decisions. For instance, when making hiring decisions, college grade point average may weigh heavily, even though it has not been shown to be a good predictor of job performance.
- Loss aversion: This bias refers to our tendency to weigh potential losses more heavily than potential gains. We come by this bias honestly; there was an evolutionary advantage to focusing on immediate threats (hungry predators) rather than on long-term planning.
- Status quo: This bias is the tendency to go along with the status quo or the default option.19
- Framing: You can find an excellent example of framing in an article by Paul J. H. Schoemaker and J. Edward Russo.20 Managers were asked how they would respond to the following situation:
“Assume you are the vice president of manufacturing in a Fortune 500 company that employs over 130,000 people with annual sales exceeding $10 billion. Due to the recession as well as structural changes in your industry, one of your factories (with 600 employees) is faced with either a complete or partial shutdown. You and your staff carefully narrowed the options to either:
- A. Scale back and keep a few production lines open. Exactly 400 jobs will be lost (out of 600).
- B. Invest in new equipment that may or may not improve your competitive position. There is a 1/3 chance that no jobs will be lost but a 2/3 chance all 600 jobs will be lost.
Financially, these options are equally attractive (in expected rate of return). The major difference is the effect of the decision on the plant workers, who have stood by the company for many hard years without unionizing. Which option would you choose if these were your only alternatives?”
The exercise is repeated and this time the options are slightly reworded.
- A. “Scale back and keep a few production lines open. Exactly 200 jobs will be saved (out of 600 threatened by layoff).
- B. Invest in new equipment that may or may not improve your competitive position. There is a 1/3 chance all jobs will be saved but a 2/3 chance that none of the 600 jobs will be saved.”21
Tellingly, when the options were “framed” as in the first example, most managers chose option A. When framed as in the second, most chose the opposite.
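The two framings describe identical outcomes, which a quick expected-value check confirms (an illustrative sketch, not part of the original study; `Fraction` is used just to keep the arithmetic exact):

```python
from fractions import Fraction

third = Fraction(1, 3)

# First framing (losses): Option A loses exactly 400 of 600 jobs;
# Option B has a 1/3 chance of losing none and a 2/3 chance of losing all 600.
expected_lost_a = 400
expected_lost_b = third * 0 + (1 - third) * 600

# Second framing (gains): Option A saves exactly 200 jobs;
# Option B has a 1/3 chance of saving all 600 and a 2/3 chance of saving none.
expected_saved_a = 200
expected_saved_b = third * 600 + (1 - third) * 0

# Lost + saved = 600 under every option: the framings are the same choice.
print(expected_lost_a, expected_lost_b)
print(expected_saved_a, expected_saved_b)
```

Since both options lose 400 jobs (equivalently, save 200) in expectation under either wording, any reversal in managers' choices can only come from the framing itself.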
These and other biases that are discussed in later chapters all serve to undermine the quality of many decisions generally and HCM decisions specifically.
1.1.5. Big Data and Information Overload
We are in the age of very, very big data. Just how big? Pretty big. Table 1.1 describes various quantities of bytes.22
Table 1.1 Byte Measurements

- Kilobyte (KB): 10^3 bytes
- Megabyte (MB): 10^6 bytes
- Gigabyte (GB): 10^9 bytes
- Terabyte (TB): 10^12 bytes
- Petabyte (PB): 10^15 bytes
- Exabyte (EB): 10^18 bytes
- Zettabyte (ZB): 10^21 bytes
- Yottabyte (YB): 10^24 bytes
The amount of data in “big data” is simply staggering. There are roughly one billion transistors per person and four billion cell phone users.23 According to Gartner, the amount of information is growing at 59% annually,24 and much of this information is unstructured data in the form of video, social media, blogs, and so on. There is simply too much information for our brains to process adequately. The brain itself can be thought of as a tremendous data-producing mechanism, given that it contains 85 to 100 billion neurons and produces roughly 300,000 petabytes of data each year.25 For some time now, we have had more information than we can process, and the ongoing exponential increase in information (the information explosion) exacerbates this situation. One place where computers have us beat is in processing tremendous amounts of information very, very fast.
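To put the 59% annual growth figure in perspective, a one-line calculation gives the implied doubling time (a back-of-the-envelope sketch; it assumes the Gartner rate holds steady year over year):

```python
import math

# Gartner's figure, cited in the text: information grows about 59% per year.
annual_growth = 0.59

# Solve (1 + g)^t = 2 for t, the number of years until the stock doubles.
doubling_time = math.log(2) / math.log(1 + annual_growth)
print(round(doubling_time, 2))  # roughly a year and a half
```

In other words, at that rate the world's information stock doubles roughly every eighteen months, which is why yesterday's processing capacity keeps falling behind.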
1.1.6. The Problem with Certitude
During dinner once with a former colleague and her husband, Raiders of the Lost Ark came up as we were talking about movies. We started discussing the scene in which Marian (played by Karen Allen) won a drinking game in the bar she owned. My former colleague was absolutely certain that the person she drank under the table was Indiana Jones (Harrison Ford). Raiders of the Lost Ark was one of my favorite movies, so I knew differently. I told her that it was actually some otherwise unknown local, not Indy. So certain that she was right, she said that she would bet her house it was Jones. The words of some wise sage popped into my head: “If someone offers you a perfectly good house, take it.” So, I took the bet, and we headed down to the local video rental store.

However, I was starting to have mixed feelings about actually taking their house, so I told them that I would be happy to let them off the hook and drop the bet. This elicited some pretty dodgy accusations about my stomach for betting. So, as long as they insisted.... Before watching the movie, I asked my former colleague (who is extremely bright and one of the top academics in her field) what she considered to be the probability of her being correct. She said 99.9999%. In other words, she was sure that she was right, really sure.

Anyone who has seen the movie and remembers that scene will know that I won a house. In case you are interested, I let them stay in their home, but I was not above occasionally asking whether they were taking good care of my property. I am not sharing this story to spotlight my movie knowledge. Instead, I want to point out that just because we really, really think we are right does not mean that we necessarily are. And trust me, I have been guilty of this more than once.
1.1.7. Advanced Analytics Does Not Care Who It Annoys
Unfortunately, some in positions of authority have fragile egos or are primarily concerned with advancing their own agenda rather than dealing with actual facts. Hiring yes men and yes women is simply a losing proposition. Warren Buffett, for instance, goes out of his way to seek out people to tell him that he is wrong, and many (if not all) successful organizations never become self-satisfied. One of the big advantages of advanced analytics is that it is entirely immune to big egos, groupthink, and the loudest voice getting its way.
Evolution has favored those who are good at advancing an argument, whether or not the argument is based on fact, and so we come by our opinionated natures honestly. The challenge arises when the focus shifts from getting to the truth of the matter to winning the argument instead. We hope, of course, that those who are right win. Unfortunately, though, the evidence indicates that this is not always the case. The April 2011 issue of the journal Behavioral and Brain Sciences was devoted to the theory of argumentative reasoning.26 The theory holds that we developed rationality not as a result of our desire to pursue philosophical and scientific insight and to develop a superior morality, but rather to win arguments. When it comes to winning arguments, what matters is certitude—knowing, or at least projecting, that you are certain you are right. Those skilled at winning arguments are advancing positions rather than looking for the truth. All too often, therefore, “cherry picking” of the facts takes place. Here is where more sophisticated analytical models can play a critical role.
Philip Tetlock convincingly advises that we should consider expert advice with caution. Over a 20-year period, Tetlock followed the forecasts of 284 experts who were professional predictors of political and economic trends. He asked them to rate the probability of three different possible outcomes: no change in the current situation or either an increase or decrease in a factor like economic growth. He discovered that the experts with many years of experience and Ph.D.s were roughly as accurate as dart-throwing monkeys.27 This is in no way meant to disparage the advice of all experts; after all, forecasting the future is a difficult thing. However, it is sensible to view most prognostications cautiously.
In his book Streetlights and Shadows, the psychologist Gary Klein states the following:
- I am saddened to see ineffective decision-support systems that are designed in accordance with ideology rather than observation. If we try to balance the human as hazard model with the human as hero model, and to balance the automatic, intuitive system with the reflective, analytical system, we should have more of a chance to create decision-support systems that will get used.28
The tools and processes discussed in the rest of this book attempt to do just that: combine both the intuitive and the analytical to provide us with the best possible decisions.
1.1.8. Types of Decision Making
Hoch and Kunreuther propose three different levels from which decision making can be viewed:29
- Normative: The normative approach describes how decisions ought to be made; it holds, for example, that we would be better served by making decisions based on rationality.
- Descriptive: The descriptive level describes what we actually observe about how decisions are made.
- Prescriptive: Prescriptive recommendations focus on improving decision making.
Much decision science research and work is tied to formal mathematical models. Recently, however, cognitive approaches to decision making have become a focus. This discussion adopts a prescriptive approach in evaluating the various factors that impact decision making and the technologies that can promote desirable outcomes.