What is a central tendency bias error?

Central tendency bias refers to the tendency of raters or managers to evaluate most of their employees as "average" when they apply a rating scale. For example, given a scale that runs from one (poor) to seven (excellent), with four as the average, many managers will avoid using the points at either end, and almost all of their ratings will fall within the 3-5 range. This is problematic because a very poor employee may be rated at or even slightly above average despite deserving a far lower score, while a superior employee may be rated in that same 3-5 range even though he or she deserves a much higher rating.

Shorter rating scales (e.g., three points rather than seven) tend to produce less central tendency bias, but they are also less precise.

You've probably heard managers say, "I never rate people as excellent." This is an example of central tendency bias.

Central tendency bias is a serious data-collection problem. It is the tendency to avoid extreme answers and instead pick an answer closer to the center of the available options. It often appears in subjective grading: teachers, managers, or customers appraising a product are reluctant to claim that someone or something is perfect, so they avoid giving anyone a top score. Similarly, most people rarely give a completely negative score, perhaps because it feels harsh or might leave the person being rated feeling hopeless.

Avoiding these extreme responses makes your data less meaningful, because it compresses the range of responses and pulls the group means closer together, making real differences harder to detect. Central tendency bias also tends to increase when a survey contains many questions, since long runs of similar items push people toward the less extreme answers.
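To make that "compressed means" point concrete, here is a minimal Python sketch (not from the original article) that simulates a candid rater and a biased rater on a 1-7 scale. The "pull toward the midpoint" rule and its strength are assumptions chosen purely for illustration.

```python
import random
import statistics

# Hypothetical illustration (not from the article): how central tendency
# bias compresses ratings. "true_scores" are the ratings a perfectly candid
# rater would give on a 1-7 scale; the biased rater pulls every score back
# toward the midpoint (4), so everything lands in the 3-5 range.

random.seed(42)
true_scores = [random.randint(1, 7) for _ in range(1000)]

def central_tendency(score, midpoint=4, pull=0.6):
    """Pull a score toward the scale midpoint and round to a whole point."""
    return round(midpoint + (score - midpoint) * (1 - pull))

biased_scores = [central_tendency(s) for s in true_scores]

print("candid ratings: mean %.2f, stdev %.2f" %
      (statistics.mean(true_scores), statistics.stdev(true_scores)))
print("biased ratings: mean %.2f, stdev %.2f" %
      (statistics.mean(biased_scores), statistics.stdev(biased_scores)))
```

Running this shows the mean barely moves while the standard deviation shrinks sharply, which is exactly why biased ratings make items (or people) hard to tell apart.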

Tips to Avoid Central Tendency Bias

First and foremost, shortening your survey has been shown to reduce central tendency bias in the results. However, that may not be feasible, in which case you should consider the following ideas:

One idea is to force comparative rankings rather than using a rating scale with the same response options for every item. For example, ask customers to rank features by priority: 1, 2, 3, and so on down to the lowest-priority item. Forcing people to choose a priority level means that something (or someone) will be ranked "best," ensuring that two items of unequal value are not scored as equal because of this bias.
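As a sketch of the forced-ranking idea, the following Python snippet aggregates per-respondent rankings into an average rank per feature. The feature names and responses are hypothetical, not from the article; the point is that because every respondent must give each item a unique rank, no one can rate everything as "about average."

```python
from statistics import mean

# Hypothetical sketch of aggregating forced-ranking responses. Each
# respondent must give every feature a unique rank (1 = top priority),
# so nobody can call everything "about average". Feature names and
# responses are made up for illustration.

responses = [
    {"offline sync": 1, "dark mode": 2, "export to CSV": 3},
    {"offline sync": 1, "export to CSV": 2, "dark mode": 3},
    {"dark mode": 1, "offline sync": 2, "export to CSV": 3},
]

features = responses[0].keys()
avg_rank = {f: mean(r[f] for r in responses) for f in features}

# Lower average rank = higher priority across respondents.
for feature, rank in sorted(avg_rank.items(), key=lambda kv: kv[1]):
    print(f"{feature}: average rank {rank:.2f}")
```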

Another thing you can do is change the way your questions are asked. Often (but not always) central tendency bias occurs because the questions all appear in a row, and respondents eventually lose interest in giving out an extreme score. Mixing the questions up so they feel fresher to the person answering them may reduce this problem.
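If the survey is delivered programmatically, one simple way to mix questions up is to randomize their order per respondent. The snippet below is a hypothetical sketch using Python's standard library; the question text is invented for illustration.

```python
import random

# Hypothetical sketch: show each respondent the questions in a different
# (but reproducible) order, so fatigue with "yet another 1-7 item" does not
# always hit the same questions. Question text is made up for illustration.

questions = [
    "How useful is the reporting dashboard?",
    "How easy was the initial setup?",
    "How satisfied are you with customer support?",
]

def questions_for_respondent(respondent_id):
    order = list(questions)                      # copy; leave the master list intact
    random.Random(respondent_id).shuffle(order)  # seeded, so the order is reproducible
    return order

print(questions_for_respondent(respondent_id=101))
```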

Detecting and Avoiding Central Tendency Bias

You should be able to find out whether central tendency bias is present by testing the survey beforehand. It is important to discover this bias early so that you can take steps to counter it: your data is only as useful as it is accurate, and if the bias persists, you will collect data that may not represent respondents' true feelings.
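As a rough, hypothetical diagnostic for such a pilot run, you could check how often respondents use the endpoints of the scale at all. The sample data and thresholds below are illustrative assumptions, not from the article.

```python
from collections import Counter

# Hypothetical pilot-run diagnostic: on a 1-7 scale, how often do
# respondents use the endpoints versus the middle? A very small share of
# 1s, 2s, 6s and 7s is a warning sign of central tendency bias.
# The pilot responses below are made up for illustration.

pilot_responses = [4, 4, 3, 5, 4, 5, 3, 4, 4, 5, 3, 4, 6, 4, 4, 5, 3, 4, 4, 5]

counts = Counter(pilot_responses)
n = len(pilot_responses)
extreme_share = sum(counts[v] for v in (1, 2, 6, 7)) / n
middle_share = sum(counts[v] for v in (3, 4, 5)) / n

print(f"extreme answers: {extreme_share:.0%}   middle answers: {middle_share:.0%}")
```

If almost no pilot answers fall at 1-2 or 6-7, that is a strong hint the full survey will suffer the same compression, and the fixes above are worth applying before launch.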

  • Alais, D., & Burr, D. (2004). Ventriloquist effect results from near-optimal bimodal integration. Current Biology, 14(3), 257–262.

    PubMed  Google Scholar 

  • Ashourian, P., & Loewenstein, Y. (2011). Bayesian inference underlies the contraction bias in delayed comparison tasks. PLOS ONE, 6(5), e19551.

    PubMed  PubMed Central  Google Scholar 

  • Aston, S., Pattie, C., Beierholm, U., & Nardini, M. (2020). Failure to account for extrinsic visual noise leads to suboptimal multisensory integration. Journal of Vision, 20(11), 880.

    Google Scholar 

  • Bejjanki, V.R., Knill, D.C., & Aslin, R.N. (2016). Learning and inference using complex generative models in a spatial localization task. Journal of Vision, 16(2016), 1–13.

    Google Scholar 

  • Berniker, M., Voss, M., & Kording, K. (2010). Learning priors for Bayesian computations in the nervous system. PLoS ONE, 5(9), 1–9.

    Google Scholar 

  • Chambers, C., Sokhey, T., Gaebler-Spira, D., & Kording, K.P. (2018). The development of Bayesian integration in sensorimotor estimation. Journal of Vision, 18(12), 8.

    PubMed  PubMed Central  Google Scholar 

  • Cicchini, G.M., Arrighi, R., Cecchetti, L., Giusti, M., & Burr, D.C. (2012). Optimal encoding of interval timing in expert percussionists. The Journal of Neuroscience, 32(3), 1056 LP – 1060.

    Google Scholar 

  • Corbin, J.C., Crawford, L.E., & Vavra, D.T. (2017). Misremembering emotion: Inductive category effects for complex emotional stimuli. Memory & Cognition, 45(5), 691–698.

    Google Scholar 

  • Crawford, L.E. (2019). Reply to Duffy and Smith’s (2018) reexamination. Psychonomic Bulletin & Review, 26(2), 693–698.

    Google Scholar 

  • Crawford, L.E., Huttenlocher, J., & Engebretson, P.H. (2000). Category effects on estimates of stimuli: Perception or reconstruction? Psychological Science, 11(4), 280–284.

    PubMed  Google Scholar 

  • Duffy, S., Huttenlocher, J., Hedges, L.V., & Crawford, L.E. (2010). Category effects on stimulus estimation: Shifting and skewed frequency distributions. Psychonomic Bulletin and Review, 17(2), 224–230.

    PubMed  Google Scholar 

  • Duffy, S., & Smith, J. (2018). Category effects on stimulus estimation: Shifting and skewed frequency distributions—A reexamination. Psychonomic Bulletin & Review, 25(5), 1740–1750.

    Google Scholar 

  • Duffy, S., & Smith, J. (2020a). Omitted-variable bias and other matters in the defense of the category adjustment model: A comment on Crawford (2019). Journal of Behavioral and Experimental Economics, 85, 101501.

    Google Scholar 

  • Duffy, S., & Smith, J. (2020b). On the category adjustment model: another look at Huttenlocher, Hedges, and Vevea (2000). Mind & Society, 19(1), 163–193.

    Google Scholar 

  • Ernst, M.O., & Banks, M.S. (2002). Humans integrate visual and haptic information in a statistically optimal fashion. Nature, 415(6870), 429–433.

    PubMed  Google Scholar 

  • Hillis, J.M., Ernst, M.O., Banks, M.S., & Landy, M.S. (2002). Combining sensory information: Mandatory fusion within, but not between, senses. Science, 298(5598), 1627–1630.

    PubMed  Google Scholar 

  • Hollingworth, H.L. (1910). The central tendency of judgment. The Journal of Philosophy, Psychology and Scientific Methods, 7(17), 461–469.

    Google Scholar 

  • Huttenlocher, J., Hedges, L.V., & Vevea, J.L. (2000). Why do categories affect stimulus judgment? Journal of Experimental Psychology: General, 129(2), 220–241.

    Google Scholar 

  • Jamieson, D.G. (1977). Two presentation order effects. Canadian Journal of Psychology, 31(4), 184–194.

    PubMed  Google Scholar 

  • Jazayeri, M., & Shadlen, M.N. (2010). Temporal context calibrates interval timing. Nature Neuroscience, 13(8), 1020–1026.

    PubMed  PubMed Central  Google Scholar 

  • Jones, S.A., Beierholm, U., Meijer, D., & Noppeney, U. (2019). Older adults sacrifice response speed to preserve multisensory integration performance. Neurobiology of Aging, 84, 148–157.

    PubMed  Google Scholar 

  • Kiryakova, R.K., Aston, S., Beierholm, U.R., & Nardini, M. (2020). Bayesian transfer in a complex spatial localization task. Journal of Vision, 20(6), 17.

    PubMed  PubMed Central  Google Scholar 

  • Knill, D.C., & Saunders, J.A. (2003). Do humans optimally integrate stereo and texture information for judgments of surface slant? Vision Research, 43(24), 2539–2558.

    PubMed  Google Scholar 

  • Kȯrding, K. P., & Wolpert, D.M. (2004). Bayesian integration in sensorimotor learning. Nature, 427(6971), 244–247.

    PubMed  Google Scholar 

  • Krügel, A., Rothkegel, L., & Engbert, R. (2020). No exception from Bayes’ rule: The presence and absence of the range effect for saccades explained. Journal of Vision, 20(7), 15.

    PubMed  PubMed Central  Google Scholar 

  • Laquitaine, S., & Gardner, J.L. (2018). A switching observer for human perceptual estimation. Neuron, 97(2), 462–474.e6.

    PubMed  Google Scholar 

  • Murai, Y., & Yotsumoto, Y. (2018). Optimal multisensory integration leads to optimal time estimation. Scientific Reports, 8(1), 13068.

    PubMed  PubMed Central  Google Scholar 

  • Negen, J., Chere, B., Bird, L. -A., Taylor, E., Roome, H.E., Keenaghan, S., ..., Nardini, M. (2019). Sensory cue combination in children under 10 years of age. Cognition, 193, 104014.

    PubMed  Google Scholar 

  • Norton, E.H., Acerbi, L., Ma, W.J., & Landy, M.S. (2019). Human online adaptation to changes in prior probability. PLoS Computational Biology, 15(7), e1006681.

    PubMed  PubMed Central  Google Scholar 

  • Odegaard, B., Beierholm, U.R., Carpenter, J., & Shams, L. (2019). Prior expectation of objects in space is dependent on the direction of gaze. Cognition, 182, 220–226.

    PubMed  Google Scholar 

  • Olkkonen, M., & Allred, S.R. (2014). Short-term memory affects color perception in context. Plos One, 9(1), 1–11.

    Google Scholar 

  • Olkkonen, M., McCarthy, P.F., & Allred, S.R. (2014). The central tendency bias in color perception: Effects of internal and external noise. Journal of Vision, 14(11), 1–15.

    Google Scholar 

  • Oruç, I., Maloney, L.T., & Landy, M.S. (2003). Weighted linear cue combination with possibly correlated error. Vision Research, 43(23), 2451–2468.

    PubMed  Google Scholar 

  • Plummer, M. (2003). JAGS: A Program for Analysis of Bayesian Graphical Models using Gibbs Sampling. In 3rd International Workshop on Distributed Statistical Computing (DSC 2003); Vienna, Austria p 124.

  • Rahnev, D., & Denison, R.N. (2018). Suboptimality in perceptual decision making. Behavioral and Brain Sciences, 41, e223.

    Google Scholar 

  • Riskey, D.R., Parducci, A., & Beauchamp, G.K. (1979). Effects of context in judgments of sweetness and pleasantness. Perception & Psychophysics, 26(3), 171–176.

    Google Scholar 

  • Roberson, D., Damjanoviv, L., & Pilling, M. (2007). Categorical perception of facial expressions: Evidence for a “category adjustment” model. Memory and Cognition, 35(7), 1814–1829.

    PubMed  Google Scholar 

  • Ryan, L.J. (2011). Temporal context affects duration reproduction reproduction. Journal of Cognitive Psychology, 23(1), 157–170.

    Google Scholar 

  • Scarfe, P. (2020). Experimentally disambiguating models of sensory cue integration. bioRxiv, 2020.09.01.277400.

  • Scarfe, P., & Hibbard, P.B. (2011). Statistically optimal integration of biased sensory estimates. Journal of Vision, 11(7), 12.

    PubMed  Google Scholar 

  • Sciutti, A., Burr, D., Saracco, A., Sandini, G., & Gori, M. (2014). Development of context dependency in human space perception. Experimental Brain Research, 232(12), 3965–3976.

    PubMed  Google Scholar 

  • Tassinari, H., Hudson, T.E., & Landy, M.S. (2006). Combining priors and noisy visual cues in a rapid pointing task. Journal of Neuroscience, 26(40), 10154–10163.

    PubMed  Google Scholar 

  • Vilares, I., Howard, J.D., Fernandes, H.L., Gottfried, J.A., & Kording, K.P. (2012). Differential representations of prior and likelihood uncertainty in the human brain. Current Biology, 22(18), 1641–1648.

    PubMed  Google Scholar 

