Drawing by James Stevenson.
Here’s a simple arithmetic question: A bat and ball cost a dollar and ten cents. The bat costs a dollar more than the ball. How much does the ball cost?
The vast majority of people respond quickly and confidently, insisting the ball costs ten cents. This answer is both obvious and wrong. (The correct answer is five cents for the ball and a dollar and five cents for the bat.)
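The puzzle reduces to two facts: bat plus ball costs 110 cents, and the bat costs 100 cents more than the ball. Subtracting one equation from the other makes the answer explicit, as this quick Python sketch shows:

```python
# Bat-and-ball puzzle, worked in cents so the arithmetic stays exact.
# Two facts: bat + ball = 110 and bat - ball = 100.
total = 110       # combined price in cents
difference = 100  # the bat costs this much more than the ball

# Subtracting the second equation from the first: 2 * ball = 10.
ball = (total - difference) // 2
bat = ball + difference

print(f"ball = {ball} cents, bat = {bat} cents")  # ball = 5 cents, bat = 105 cents
```

The intuitive answer of ten cents fails the check: a ten-cent ball forces a $1.10 bat, for a total of $1.20.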
For more than five decades, Daniel Kahneman, a Nobel Laureate and professor of psychology at Princeton, has been asking questions like this and analyzing our answers. His disarmingly simple experiments have profoundly changed the way we think about thinking. While philosophers, economists, and social scientists had assumed for centuries that human beings are rational agents—reason was our Promethean gift—Kahneman, the late Amos Tversky, and others, including Shane Frederick (who developed the bat-and-ball question), demonstrated that we’re not nearly as rational as we like to believe.
When people face an uncertain situation, they don’t carefully evaluate the information or look up relevant statistics. Instead, their decisions depend on a long list of mental shortcuts, which often lead them to make foolish decisions. These shortcuts aren’t a faster way of doing the math; they’re a way of skipping the math altogether. Asked about the bat and the ball, we forget our arithmetic lessons and instead default to the answer that requires the least mental effort.
Although Kahneman is now widely recognized as one of the most influential psychologists of the twentieth century, his work was dismissed for years. Kahneman recounts how one eminent American philosopher, after hearing about his research, quickly turned away, saying, “I am not interested in the psychology of stupidity.”
The philosopher, it turns out, got it backward. A new study in the Journal of Personality and Social Psychology led by Richard West at James Madison University and Keith Stanovich at the University of Toronto suggests that, in many instances, smarter people are more vulnerable to these thinking errors. Although we assume that intelligence is a buffer against bias—that’s why those with higher S.A.T. scores think they are less prone to these universal thinking mistakes—it can actually be a subtle curse.
West and his colleagues began by giving four hundred and eighty-two undergraduates a questionnaire featuring a variety of classic bias problems. Here’s an example:
In a lake, there is a patch of lily pads. Every day, the patch doubles in size. If it takes 48 days for the patch to cover the entire lake, how long would it take for the patch to cover half of the lake?
Your first response is probably to take a shortcut, and to cut the final answer in half. That leads you to twenty-four days. But that’s wrong. The correct solution is forty-seven days.
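Because the patch doubles every day, it must have covered half the lake exactly one day before it covered all of it. A minimal Python sketch confirms this by running the doubling backward:

```python
# Lily-pad puzzle: the patch doubles daily and covers the lake on day 48.
# Working backward, halve the coverage until it is half the lake or less.
FULL_DAY = 48

coverage = 1.0   # fraction of the lake covered on day FULL_DAY
day = FULL_DAY
while coverage > 0.5:
    coverage /= 2  # one day earlier, the patch was half its size
    day -= 1

print(day)  # 47
```

The loop runs exactly once: halving full coverage lands on half the lake, one day earlier.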
West also gave a puzzle that measured subjects’ vulnerability to something called “anchoring bias,” which Kahneman and Tversky had demonstrated in the nineteen-seventies. Subjects were first asked if the tallest redwood tree in the world was more than X feet, with X ranging from eighty-five to a thousand feet. Then the students were asked to estimate the height of the tallest redwood tree in the world. Students exposed to a small “anchor”—like eighty-five feet—guessed, on average, that the tallest tree in the world was only a hundred and eighteen feet. Given an anchor of a thousand feet, their estimates increased seven-fold.
But West and colleagues weren’t simply interested in reconfirming the known biases of the human mind. Rather, they wanted to understand how these biases correlated with human intelligence. As a result, they interspersed their tests of bias with various cognitive measurements, including the S.A.T. and the Need for Cognition Scale, which measures “the tendency for an individual to engage in and enjoy thinking.”
The results were quite disturbing. For one thing, self-awareness was not particularly useful: as the scientists note, “people who were aware of their own biases were not better able to overcome them.” This finding wouldn’t surprise Kahneman, who admits in “Thinking, Fast and Slow” that his decades of groundbreaking research have failed to significantly improve his own mental performance. “My intuitive thinking is just as prone to overconfidence, extreme predictions, and the planning fallacy”—a tendency to underestimate how long it will take to complete a task—“as it was before I made a study of these issues,” he writes.
Perhaps our most dangerous bias is that we naturally assume that everyone else is more susceptible to thinking errors, a tendency known as the “bias blind spot.” This “meta-bias” is rooted in our ability to spot systematic mistakes in the decisions of others—we excel at noticing the flaws of friends—and inability to spot those same mistakes in ourselves. Although the bias blind spot itself isn’t a new concept, West’s latest paper demonstrates that it applies to every single bias under consideration, from anchoring to so-called “framing effects.” In each instance, we readily forgive our own minds but look harshly upon the minds of other people.
And here’s the upsetting punch line: intelligence seems to make things worse. The scientists gave the students four measures of “cognitive sophistication.” As they report in the paper, all four of the measures showed positive correlations, “indicating that more cognitively sophisticated participants showed larger bias blind spots.” This trend held for many of the specific biases, indicating that smarter people (at least as measured by S.A.T. scores) and those more likely to engage in deliberation were slightly more vulnerable to common mental mistakes. Education also isn’t a savior; as Kahneman and Shane Frederick first noted many years ago, more than fifty per cent of students at Harvard, Princeton, and M.I.T. gave the incorrect answer to the bat-and-ball question.
What explains this result? One provocative hypothesis is that the bias blind spot arises because of a mismatch between how we evaluate others and how we evaluate ourselves. When considering the irrational choices of a stranger, for instance, we are forced to rely on behavioral information; we see their biases from the outside, which allows us to glimpse their systematic thinking errors. However, when assessing our own bad choices, we tend to engage in elaborate introspection. We scrutinize our motivations and search for relevant reasons; we lament our mistakes to therapists and ruminate on the beliefs that led us astray.
The problem with this introspective approach is that the driving forces behind biases—the root causes of our irrationality—are largely unconscious, which means they remain invisible to self-analysis and impermeable to intelligence. In fact, introspection can actually compound the error, blinding us to those primal processes responsible for many of our everyday failings. We spin eloquent stories, but these stories miss the point. The more we attempt to know ourselves, the less we actually understand.
Source: “Why Smart People Are Actually Dumb,” The New Yorker