Why We Have Had Enough of “Experts”

November 28, 2016

On June 3, 2016, Britain’s justice secretary, Michael Gove, made international headlines with his assertion that “people in this country have had enough of experts.” According to the Financial Times, Gove made that statement in response to the suggestion that no economic experts supported his stance in favor of Brexit (i.e., Britain’s exit from the European Union).

On its face, Gove’s statement was stupid and hypocritical. He and his fellow Brexit advocates continued to live in a society completely dependent upon experts in computer science, bridge engineering, airplane construction, and many other fields. In an advanced society, it seemed, one might as well assert that people have had quite enough of food, water, and oxygen.

And that’s how much of the media treated Gove’s statement: as stupid. For instance, The Guardian replied with a survey showing that 57% of Britons trusted academics, whereas only 11% trusted politicians (e.g., Gove). That was just a few weeks before the referendum — and guess what? The Guardian and its expert pollsters were wrong; Gove was right; and Brexit became a fact. But you aren’t surprised: you’re reading this after the large majority of experts also mispredicted the outcome of the Trump-Clinton election campaign in the U.S. a few months later.

The Washington Post (Witte, 2016) cited Gove’s later comments to clarify that he had been referring only to expert advice from economists, but The Post also noted that majorities of Britons distrusted other authorities as well, including religious leaders and journalists. Clearly, Trump and Gove had detected sentiments that the polling experts missed.

Reasons to Reject Expert Opinion

So that’s the first reason why people have had enough of experts: those who claim to be experts sometimes have the degrees and the credentials but, in the particular case, it turns out they don’t know what they’re talking about. In this era of seemingly sophisticated marketing research, it is flatly pathetic that the world’s most prestigious pollsters would so badly misconstrue the public mood in two of the most important votes of 2016.

To take another glaring example from recent years, BloombergView (Smith, 2015) summarized the consensus view that economists substantially failed to predict the Great Recession that began in late 2007. That article quoted an earlier piece in The Telegraph (Pierce, 2008) in which the Queen of England, remarking on the sheer size of the financial meltdown, was reported to have asked: “How come everyone missed it?”

But of course not everyone did miss it. In a post on July 15, 2006, for example, I cited an S&P analyst who felt that the U.S. real estate market had not yet reached its bottom, and I suggested at that point (and elaborated in another post a year later) that a shakeout was coming. My post was obscure, but the view of that S&P analyst wasn’t: it came up near the top of my Google search. I’m sure economists could have found it, just as I did, if they had looked.

There are usually informed individuals on both sides of an issue. That’s why stock prices rarely go completely through the floor or the ceiling; it is unusual that everyone would be in complete agreement. What’s missing is not usually the alternative perspective, but merely the determination to find and consider it — to contemplate that, no matter what the issue may be, there’s a real risk that you’re wrong.

If you want to be right, you can’t get there by just being egotistical. You have to be prepared for scenarios very different from what you expected. So if you’re a newspaper like The Guardian, looking ahead to the Brexit vote, you aren’t supposed to look down your nose at those who sympathize with the opposing camp, as that paper did in that case. You’re supposed to report both sides of the issue, with a deep commitment to fairness and accuracy, so as to keep the public apprised of the current state of intelligent debate. Likewise, if you’re the Queen, you don’t accept as gospel the words of your (allegedly best and brightest) advisors; and if you’re an American economist, you stay humble and keep asking yourself why some people are still predicting rocky times ahead.

The related point, and the second reason why people don’t trust experts, is that experts have become emotional. The dispassionate stance just recommended is scarce these days. Somehow it became fashionable to hyperventilate, and that expectation is now pervasive: newspapers feel it; people posting things on Facebook feel it. It seems you have to dramatize everything if you wish to gain the attention of readers or viewers, and to satisfy peers who delight in finding new ways of disparaging the other side. Perhaps this is what we should have expected from the misguided effort to politicize everyone, that is, to make everyone an informed voter on everything, when most people do not have the time, interest, training, and ability that this would require. Perhaps we should have realized that the result would be a lot of people who are understandably angry about the half-truths they’re being fed.

The third reason why people don’t trust experts is that the experts have gone beyond emotion. Raw emotion ebbs and flows. There tends to be a time when you can break through to a person who is merely upset. But when ideology comes into play, there is no getting through to the person. And that is, sadly, the nature of much so-called expert thinking these days. Too often, and especially in the myriad matters that can be considered political, our experts are immune to reason. They already know what they want to believe, and thus become catastrophically clueless.

Another reason why people don’t trust experts is that the experts are increasingly corrupt. What often drives the ego, the passion, and the ideology is that that’s where the paycheck is. Quite commonly, you can’t get the PhD and be hired and promoted by trying to be honest and principled. You have to play the game. You have to manipulate and hurt people; you have to tell quite a few lies, about yourself and those around you and also, too often, about your professional work.

The politics and the economic corruption have generated yet another reason why people don’t trust so-called experts. In fact, experts aren’t what they used to be. As I have personally seen in such areas as the award of the PhD and the publication of papers, today’s academics often award expert credentials to people of average and sometimes even subpar intelligence and ability. Reasons for successful graduation, publication, and other academic kudos can include sleeping with the right person, having the right skin color, and spouting the right viewpoint. The general public will not be reading the pablum that these professors peddle, so the general public may not immediately realize that the putative expert is a dolt. But it may not take many encounters of any nature for the average intelligent person to recognize that these are not persons of high intellectual (or, for that matter, moral) standing.

There is also the problem that the nature of expertise has changed. The era of Leonardo da Vinci gave us the concept of the Renaissance Man, the polymath, the “person whose expertise spans a significant number of different subject areas” (Wikipedia). This is not to deny that, even today, a person can be moderately well-informed in multiple subjects. But true expertise across many diverse fields is generally no longer possible. In every major area of knowledge, the torrent of potentially relevant reading material is beyond the capability of any one person. Significant achievements tend to require collaboration among multiple narrowly focused specialists.

And that’s great, when a team can be assembled and funded. But in everything else, we have the problem of a puzzle made of more and smaller pieces: it becomes harder to figure out how the whole thing is supposed to look. Ordinary people experience this in the frequent phenomenon of conflicting research reports. One study says that drinking wine or eating meat is bad for you; another says the opposite. This sort of thing has always gone on among experts who didn’t necessarily have any personal stake in the outcome, but much more of it is now communicated to people who feel a pressing need to know the truth. Expertise is demystified. The public sees, day after day, that the experts don’t have all the answers.

The Post-Truth Zeitgeist

There is one other important reason for the loss of faith in experts: the meaning of “truth” itself has changed. Experts were once treated, almost invariably, as the arbiters of truth. They were the ones who knew what science had proved. In the best (or at least simplest) cases, the experts still have that, and such cases are not negligible. This is the sort of truth that results in the smashing of atoms and the building of cities: hard, cold fact that you ignore at your peril.

The problem is that the foundations of that sort of truth are routinely rejected in many areas of life. For the reasons sketched out above, a person who is grossly uninformed about a given topic (especially in the social sciences) will no longer reliably assume that his/her own beliefs are inferior to those of an expert. In the words of Shaw (2016), “Using words and phrases that most people don’t understand in everyday conversation and through the media can be seen as an elitist attempt to assert intellectual dominance.” The perceived arrogance of experts in such situations encourages the search for a rationale by which the layperson can salvage his/her self-respect.

To the extent that the inclination to reach seemingly illogical conclusions is driven by a self-preservation instinct triggered by the expert’s style, it will not help to pile up additional arguments further demonstrating the layperson’s ignorance. Doing so may only enhance the mutual impressions that the expert is insufferable and the layperson is irrational. Instead, Shaw cites research by Hendriks et al. (2015) for the proposition that trust in experts requires a belief in not only their expertise, but also their integrity and benevolence.

Epistemologically, the perspective of the expert-rejecting layperson is not necessarily well explained by Quine’s (1951, p. 40) theory that “Any statement can be held true come what may, if we make drastic enough adjustments elsewhere” in the individual’s system of beliefs, so as to preserve that system’s internal consistency. Quine might have said that a person could believe that the Earth is flat; but to maintain such a belief, it would probably be necessary to reinterpret many other observations and beliefs (e.g., what about the viewpoint of astronauts?). The problem with Quine’s theory is that, on a practical level, most people (including most academics) are not actually trying to construct, in their heads, a logically consistent system of beliefs. Typical human thinking seems more like bricolage: an attempt, by the individual, to patch together his/her pieces of knowledge, so as to create something that sounds good enough to survive if not win the immediate dispute, or to answer and get past the question at hand.

Bricolage seems to be compatible with today’s embrace of crowdsourced knowledge based on the advice of peers. You take a bit of this and a dash of that, and you’ve got yourself a set of arbitrarily informed beliefs that jangle around rather loosely in your head. When those beliefs conflict, as they sometimes do, you’re more likely to suppress that fact than to explore it; there’s usually not enough time, information, and motivation for the alternative. This is, most likely, the way people have always been; it’s just that now they have more well- or poorly-informed voices telling them that their wildest imaginings might actually have some grain of truth.

To demonstrate a bit of sympathy with that modus operandi, I cite a newspaper article (Tett, 2016) that cites a somewhat random individual for the view that we now occupy a “post-truth” era — an era in which truth, for me, is what I heard from someone I believe or agree with. According to the Oxford Dictionaries, “post-truth” means that public opinion depends more on emotion and personal belief than on objective facts. The Washington Post (Wang, 2016) credits this post-truth inclination with the public impression that Hillary Clinton was less truthful than Donald Trump, in the 2016 presidential election campaign, even though he made far more factually incorrect claims.

Academia Started It

As many people have learned the hard way in the doctor’s or lawyer’s office, an advanced degree does not necessarily create a practitioner who possesses expertise, integrity, and benevolence (to cite the three crucial qualities of expert credibility identified by Shaw, above). The same is true for those laypeople who have encountered experts in other areas of life — experts employed by government or by big business, for example. They, too, cannot be naively trusted to reach accurate conclusions and to deliver appropriate outcomes.

In part, as described above, this means that the ideological commitments, the corruption, and the other problems of academia have degraded the quality of expertise, in the eyes of the general public. Even The Guardian’s report that 57% of Britons trust academics is shameful, in the land of world-class universities like Oxford and Cambridge. When 43% of the British public displays significant doubt that their centers of learning deserve trust, one can reasonably infer that the university has failed to live up to the promises on which it was founded.

But there is another problem. Even if nations like Britain and the U.S. placed complete trust in their universities, epistemological corrosion would still be proceeding apace within the halls of academe. That is, the seeming ignorance of post-truth, crowdsourced “knowledge” may reflect — may, in fact, have been encouraged by — post-truth movements within the university itself.

Ideology affects not only the biases of individual academics (above) but also the nature of intellectual output. For example, Shields and Dunn (2016) discuss the extraordinary hostility to conservative perspectives in university social science departments. That hostility does seem related to academics’ persistent failure to recognize and understand the public attitudes that gave us the disastrous President George W. Bush and now, one fears, another looming disaster in President Donald Trump. In effect, the social scientists were post-truth before the public was, insofar as their ideology limited what they were motivated to believe or understand.

Ideology and/or corruption may also help to explain the failure of universities to prepare students for their careers and for the lifelong challenge of distinguishing truth from falsehood — which they must do in, among other things, elections. For example:

  • The Association of American Colleges & Universities (AAC&U) commissioned a survey (Hart Research, 2015, p. 10) finding that 74% of students, but only 42% of employers, think that colleges and universities are doing a good job of preparing students for entry-level positions, and that 64% of students, but only 36% of employers, think they are doing a good job of preparing students for career advancement or promotion.
  • The American Council of Trustees and Alumni (ACTA, 2016) found that, among the 1,100 most significant U.S. institutions of higher education awarding four-year degrees, only one-third impose basic requirements in at least four of these seven foundational subject areas: English composition, literature, foreign language, U.S. government or history, economics, math, and natural science. ACTA reported that nearly one-fifth of those institutions decline to require a single course in English composition.
  • In a recent study, Stanford researchers (2016) were “taken aback” by undergraduates’ inability to think critically about the reliability of clearly labeled “sponsored content” presented on agenda-driven webpages.

Such educational gaps would presumably be less common if university curricula were steered by concern for what students need to know, as distinct from what professors want to teach or what universities find most profitable or marketable. There is, in that sense, an epistemological indifference or, perhaps, a vague professorial faith that any sort of exposure to the college classroom will suffice to convey an inchoate awareness of how educated thinking differs from ordinary thinking.

Social Constructionism and Second-Wave Feminism

Unfortunately, many university departments have also contributed to the post-truth mentality on a more philosophically fundamental level. There has been a determined epistemological skepticism — a conviction, that is, that the very possibility of a truth claim is problematic if not nonsensical. To exemplify such views, Pratt (n.d.) cites Lyotard (1979, p. xxiv) for his “incredulity” toward important kinds of truth claims and Rorty (1986, p. 753) for the statement that “Nothing grounds our practices, nothing legitimizes them, nothing shows them to be in touch with the way things are.” Nor are these backwater oddities: Rorty, for instance, received his PhD at Yale, taught philosophy at Princeton and Stanford, and is widely recognized as a major 20th-century philosopher.

Hruby (2001, p. 54) indicates that such epistemological skepticism fed into a concept, widely adopted in academia, known as “social constructionism.” In social constructionism, everything is reduced to language, and it is assumed that language acquires its meaning arbitrarily, through social processes. Social constructionists believe, roughly speaking, that facts are only what we make of them. Boghossian (2001) offers examples of strange attempts to extend social constructionism into the natural sciences, such as Gergen’s (1989) feminist claim that “The validity of theoretical propositions in the sciences is in no way affected by factual evidence.” Hacking (1999, p. 96) explains that claim:

[Second-wave feminists] see objectivity and abstract truth as tools that have been used against them. They remind us of the old refrain: women are subjective, men are objective. They argue that those very values, and the word objectivity, are a gigantic confidence trick.

Hacking wasn’t the only one so reminded: the Stanford Encyclopedia of Philosophy (Anderson, 2015) summarizes critics of so-called feminist epistemology (many of whom are female) in these terms:

[Second-wave feminist epistemology] corrupts the search for truth by conflating facts with values and imposing political constraints on the conclusions it will accept. Truths inconvenient to a feminist perspective will be censored, and false views promoted because they support the feminist cause. . . . [Critics also] accuse feminist epistemologists of a corrosive cynicism about science . . . . [and contend that feminist epistemology] accepts traditional stereotypes about women’s thinking (as intuitive, holistic, emotional, etc.) . . . [when] there is no evidence that women all do think alike or that thinking in a “feminine” way reliably leads to truth. . . . Valorization of “feminine” ways of thinking may also trap women in traditional gender roles and . . . . [put women] into an intellectual ghetto . . . .

In contrast to that second-wave feminist epistemology, which became mainstream in the 1980s and 1990s, van der Tuin (2009) suggests that “Third-wave feminist epistemologists do not work according to a framework of diversity thinking nor . . . [do they] return to modernist identity politics . . . .” One can hope that the third wave does solidify into a movement more intent upon making epistemological sense.

Meanwhile, however, considerable damage has been done by social constructionism and by that type of feminist epistemology. Some (e.g., Payne, 2014, p. 18) now reduce social constructionism to a mere awareness that our interpretations of social situations tend to arise from the interactions of the participants. But that was not the version of social constructionism to which I was exposed, during years of graduate study in the overwhelmingly female profession of social work. For purposes of social work and other mental health professions, according to Leppington (1991, p. 71), social constructionism rejected the idea that people get useful knowledge of the world (and of psychological problems) through the scientific process of testing hypotheses. Instead, she said, social processes determine whether people accept and retain a given scientific perspective. Similarly, Atwood (1995, pp. 1-2, 13) summarized social constructionism as a reaction against assumptions of traditional psychology. Among other things, she said, social constructionism rejected the beliefs that

there is a singular truth and if we dig deeply enough we can discover it; [that] there is a search for “scientific” predictable essences and structures; . . . [and that the therapist should gather information] about the problem, its cause, its history, its frequency . . . . [Traditional psychology] further assumes that psychological qualities or emotional qualities exist as measurable entities . . . .

Of course, psychological qualities are, in fact, measurable: through the client’s answers to questions on psychiatric survey instruments, for example; through his/her observed behavior; and, increasingly, through neurological imaging. But a generation of mental health professionals was indoctrinated with the notions that those who care about women’s rights have to be second-wave feminists, and that second-wave feminists have to reject the scientific study of human behavior. Instead of scientific diagnosis, Atwood (1995, p. 14) suggests that mental health clients will “liberate themselves through dialogue” with the therapist. Certainly the social constructionists did well to borrow the client-centered approach of humanistic psychology (e.g., Rogers, 1953; see e.g., Barry, 2002, p. 78). But it is very odd to suggest that schizophrenia or personality disorders exist merely in language, or that scientific study of such disorders, and of their treatment, is somehow improper (see Lit & Shek, 2002, p. 119).

The influence of social constructionism and second-wave feminism was not limited to the mental health professions. Denzin (2002, p. 26) noted that “epistemological and ethical criticisms of traditional social science research . . . [have] made significant in-roads into many social science disciplines.” For him, social science needed to seek its grounding, “not in science . . . but rather in a commitment to a post-Marxist and communitarian feminism” (p. 30). Expanding beyond the scope of social science per se, Denzin criticized professional ethical codes, preferring “local solutions to such traditional ethical issues as the disclosure of confidential information . . . [and] end-of-life decisions” (p. 30). Evidently Denzin would prefer the decision of the village elders (if that is the arrangement that local people have adopted) over a uniform system of laws. He further suggested that, “If we are worried that our findings will be misused by conservative powers-that-be, we can help . . . frame the presentation” (p. 31) so as to have the desired political outcome. In Denzin’s view, appropriate social science research will not strive to provide an objective or neutral evaluation of sociopolitical issues; rather, it “will take sides” (p. 33), thereby potentially depriving clients, students, and the public of a fully informed understanding.

Scholars have criticized various manifestations of these views. For instance, The Federalist (Pullman, 2016) ridicules an education PhD student whose dissertation contends that science, technology, engineering, and math (STEM) departments are sexist because they use the scientific method, with its orientation toward seeking objective truth, when (in that student’s belief) that orientation favors male ways of thinking. To the contrary, Pullman argues that research into learning styles has soundly refuted the constructionist approach to education. In a different vein, Alan Sokal (1996) gave the world a celebrated hoax article, published in all seriousness by a Duke University journal, which he described afterwards (1996) as “a mélange of truths, half-truths, quarter-truths, falsehoods, non sequiturs, and syntactically correct sentences that have no meaning whatsoever.” Sokal (1998, p. 14) seemed to feel that his ability to get that ridiculous article published was, in itself, a demonstration of the intellectual bankruptcy of the anti-science mentality discussed here. Sokal (1996) also pointed out that the post-truth tendency of social constructionism and other “postmodernist intellectual fashions” could devastate oppressed minorities that depend upon verifiable truth to oppose power.

Contemporary Examples

Today, a post-truth mindset seems to pervade significant portions of many universities. Consider, for instance, the shibboleth of race. For some time, it has been politically correct to aver that the concept of “race” is nonsensical. For example, in a Huffington Post interview of Stanford’s Marcus Feldman, David Freeman (2015) suggested this: “[F]rom a biological standpoint, it doesn’t seem to make much sense to use the term ‘race.’” Feldman did not agree. He said, to the contrary, that most biologists did continue to distinguish people by “continental ancestry,” preferring that term over “race” for its slightly greater usefulness in our world of mixed races (e.g., one-quarter African ancestry, three-quarters Asian ancestry). Moreover, Feldman pointed out, the world is still very much aware of races (by whatever name), and an ability to speak frankly about race could assist those who oppose racism as much as those who favor it.

In a more overtly post-truth style, a LiveScience opinion piece by Hadjiargyrou (2014) insisted that “[T]here is only one human race.” Hadjiargyrou explained, in effect, that the dictionaries and the world’s English speakers were all wrong, and that they should simply be redirected to his preferred view that “we are all brothers and sisters.” His motives were laudable (he expressed hostility toward “misuse of the word race”), but his vague peacemaking hope conflicted with the realism that one might expect from the hosting website that called itself “Live Science.” Contradicting Hadjiargyrou, Time (Wade, 2014) cited research in which “Analysis of genomes from around the world establishes that there is a biological basis for race, despite the official statements to the contrary of leading social science organizations.” Time agreed with Hadjiargyrou’s view that all humans share the same set of genes, but noted that specific genes have been changed, in recent millennia, in the three principal races (African, East Asian, and Caucasian, the last including peoples from India through the Near East to Europe). Those genetic changes reportedly control “not only expected traits like skin color and nutritional metabolism, but also some aspects of brain function.” Hadjiargyrou thus joined the many politically correct academics who have sought to put everyone on the “euphemism treadmill,” a term that Pinker (1994) invented to characterize the unhelpful exercise in which

People invent new “polite” words to refer to emotionally laden or distasteful things, but the euphemism becomes tainted by association and the new one that must be found acquires its own negative connotations. . . . Give a concept a new name, and the name becomes colored by the concept; the concept does not become freshened by the name. . . . Using the latest term for a minority often shows not sensitivity but subscribing to the right magazines or going to the right cocktail parties. . . . Many people who don’t have a drop of malice or prejudice but happen to be older or distant from university, media and government spheres find themselves tainted as bigots for innocently using passé terms such as “Oriental” or “crippled.”

Pinker cited Orwell (1946) for his criticisms of “euphemisms, cliches and vague writing [that] could be used to reinforce orthodoxy and defend the indefensible.” In another post, I offer “hate speech” as an example: certain expressions are called “hateful” even if there is no actual hatred. Underlying that absurdity, there seems to be a reasonable intention to oppose disparaging speech. But “disparagement” doesn’t sound so evil. It doesn’t punch hard enough. So we call it “hate” speech instead, to let everyone know it’s really bad. And thus, in service of our selfish desire to be seen as Good and True, we find ourselves punching, really hard, at the kind of person Pinker described — the old person, the hick, the individual who for whatever reason is not up-to-date on our latest euphemisms, even if that person’s actual intentions were less hostile or hateful than our own.

Let there be no doubt that the post-truth mentality does serve terrible causes in the name of political correctness. Consider, for example, another LiveScience article, by Hanink and Silva (2016). In that article — to which I replied with a critical comment — the authors display their own preexisting antipathy to the idea that ancient Greek civilization could have influenced ancient Chinese culture. The authors are so keen on being anti-European that they ignore — indeed, they seem to encourage — the possibility that the Chinese scholar who originally endorsed the prospect of Greek influence may have been subjected to political pressure, by the Chinese government, to change her tune and express a view she considered false. Not that Orwell himself would have found that surprising.

Summary

In 2016, Britain’s Michael Gove expressed a hostility toward experts that proved accurate, not only in the Brexit context he intended, but also in that year’s later presidential contest in the United States. Although intellectuals were inclined to treat such seeming anti-intellectualism as not only personally threatening but plainly stupid, it appears there were a number of factors in its favor — that, indeed, the real stupidity may have been that of the intellectuals themselves, whose arrogance and complacency overlooked many years of prior indications that the masses were not necessarily buying what the chattering classes were selling.

That intellectual stupidity appears to result from several forces undermining the erstwhile stability upon which the university-centered elite depended. Those forces include deterioration of the quality of alleged expertise, as expert credentials have become cheaper and more visibly incommensurate with the supposed expert’s limited competence. Simply put, in a number of fields, too many so-called experts have become egotistical, inept, ideological, political, and/or corrupt. It has not helped that knowledge has become so refined and specialized, to the point that many truly brilliant and expert individuals are readily perceived as hopelessly narrow-minded and out of touch with the realities of the ordinary person’s life.

Further contributing to that sense of intellectual isolation from reality, the expertise project has been shaken by a post-truth tendency to which universities themselves seem to have contributed. Too often, academics (especially in the humanities and social sciences) have been unable to resist the appeal of a putative epistemology that would excuse their own scientific, mathematical, and technological illiteracy: they have considered themselves privileged to attack science in the name of ill-conceived claims favoring vague social awareness in lieu of precisely managed study. That unfortunate attempt at an alternate epistemology has encouraged the divisiveness and elitism of politically correct speech, a regrettable conflation of feminism with irrationality, a weakening of the force with which truth can oppose power, and a largely undeserved disparagement of earnest and demonstrably valuable scientific learning.

It would be incorrect to say that the world no longer needs experts. Under the circumstances, however, it would be eminently reasonable to suggest that we have had quite enough of the concept of expertise that has given us these various afflictions. If universities cannot get serious about cleaning their own house, so as to restore their internal rigor as well as the public’s confidence, then the events of 2016 suggest that politicians, the market, and the public will eventually take care of it for them.


One Response to “Why We Have Had Enough of ‘Experts’”

  1. Ray Woodcock Says:

    A friend replies via email:

    As you allude to at various points, and as I would claim more forcefully, the problem is not experts, but what are presented as experts, and what we accept as experts. Fox News is replete with experts who know nothing about the fields they expound upon (climate change being a perfect example: their expert has never so much as taken a course in meteorology, and shows an alarming ignorance of the subject). People have no way of knowing this without expending a fair amount of effort.

    Secondly, experts disagree. You can find an expert who will tell you whatever you want to hear. And that doesn’t mean that they are necessarily dishonest or beholden; it’s just that in most disciplines there are no absolute answers (not even physics, to use your example). This is why I almost always go to the consensus opinion, of which there usually is one.

    When there are differing opinions, some of them are more credible than others, which is where a lot of problems arise. There are experts who will tell you that the earth does not revolve around the sun (I kid you not – there is a NOVA-like documentary on it from Lions Gate you can google, for instance), or that Obama’s fiscal policies will lead to runaway inflation (looks like that didn’t pan out). And Wade doesn’t understand the concept of race even at its most basic, not to mention its intricacies (but he does have a loyal following among anthropologists and other social scientist types).

    I’m endlessly annoyed by the experts being trotted out to mouth this or that. And I have to say I’m especially annoyed by the likes of Fox News, the little of it that I’ve seen, because it is so blatantly biased. Most information shows try to find two viewpoints that are as extremely opposite as possible, and then have them fight it out. The end result is that no useful information is gained. A presentation by an expert who was familiar with the consensus view, or two people moderately on either side of the consensus, would provide actual information.

    On the other hand, the lack of foresight in the stock market plunge seems to have been due in large part to the herd mentality of the players, with everyone deciding that everyone else was right, rather than thinking for themselves (discussed in The Wisdom of Crowds, for example). Oh boy, just what you want from the people managing your money.

    Enough pontificating from the expert on experts.

