July 27, 2018

The grim corollary of what I have called human nature’s difficult need for esteem is that nothing is safe from resentment. For so vital is human pride, and so powerful our concomitant urge to be esteemed by others, that there is no area of human affairs to which people will not apply their resentment-driven reasonings, evaluations, and judgments.

Indeed, as traditional sources of deep meaning—religion, the family, fulfilling work—all dissipate, men and women, in their boundless ennui, come to scrutinize things that in previous eras seemed either unimportant or else not worth the trouble.

It is chiefly resentment that motivates this new scrutiny. We must understand, however, that resentment is as subtle and insidious as the devil himself. A deft and incomparable liar, both to others and himself, the resenter smuggles in his poisonous aims under various lofty notions: fairness, justice, equality, rights, diversity, inclusion…

Thus, “AI can be sexist and racist—it’s time to make it fair,” claims an article in the current issue of Nature, the world’s preeminent scientific journal. The authors, James Zou and Londa Schiebinger, bizarrely ask: “Should…[artificial intelligence] data be representative of the world as it is, or of a world that many would aspire to?” The curious and highly questionable logic employed in much of the article leaves no doubt about where the authors themselves stand on this question.

“Steps should be taken,” they write, “to ensure that…data sets are diverse and do not under-represent particular groups. This means going beyond convenient classifications—‘woman/man,’ ‘black/white,’ and so on—which fail to capture the complexities of gender and ethnic identities.”

Is sexual dimorphism (‘woman/man’) merely a “convenient classification”? Do the authors mean to endorse the view that gender identity is altogether subjective, determinable by one’s shifting feelings or inclinations? If so, that would be quite dubious, particularly for Zou, who, unlike Schiebinger, is a scientist. It is one thing to understand that a certain kind of prejudice—for instance, “nursing is a woman’s job”—is nothing but a social construct. It is quite another to hold that gender identity does not, in many fundamental respects, derive from sex or biology, a view that no credible scientist or scholar maintains.

And how are the complexities of people’s identities to be better represented? Let’s suppose for argument’s sake that AI is to represent me, Christopher DeGroot. For though a white man is certainly not what the authors have in mind by “under-represent[ed] particular groups,” my example suffices to show the difficulties Zou and Schiebinger face. Would it be sufficient for AI to “know” that I’m American? Yet my last name denotes my Dutch ancestry. My ancestors were also French and Italian. Should all this complexity be represented? If it’s not, would I be wronged in some sense? What makes the representation satisfactory? And who gets to decide?

The authors’ “solutions” do not address the insuperable problem here: namely, that there are no objective criteria to which I and those who disagree with me can appeal to settle our dispute. In other words, there is free rein for the most arbitrary judgments, and for endless resentment. And although Zou and Schiebinger seem well aware of the epistemic opposition between AI data and moral value, they have, judging by their rather leftist article, devoted little thought to justifying the latter.

Little wonder, for at bottom they are typical academic frauds. Accordingly, they advocate “essentially nudging the machine-learning model to ensure that it achieves equitable performance across different subpopulations.” Just what does that mean? Let me quote from the article Zou and Schiebinger reference here: “the demographics of the set of individuals receiving any classification are the same as the demographics of the underlying population.”

This practice is called “statistical parity.” In many instances it would “be [most un]representative of the world as it is,” but it would certainly represent “a world that many [resentment-pipers] would aspire to.” For it entails that the subpopulation of blacks, for example, not be represented as committing more than half of all homicides, because otherwise their classification would not be “the same as the demographics of the underlying population.” Truth is to be subordinated to “equitable performance across different subpopulations.”
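
To make the criterion concrete: statistical parity, as defined in the passage just quoted, holds when every subpopulation receives a given classification at the same rate. The following minimal sketch in Python checks it on invented data; the group labels, decisions, and helper names are my own illustration, not anything drawn from Zou and Schiebinger’s article.

from collections import defaultdict

def positive_rate_by_group(groups, decisions):
    # For each group, the fraction of its members that received
    # the positive classification (decision == 1).
    totals, positives = defaultdict(int), defaultdict(int)
    for g, d in zip(groups, decisions):
        totals[g] += 1
        positives[g] += d
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(groups, decisions):
    # Statistical parity holds when this gap is (near) zero.
    rates = positive_rate_by_group(groups, decisions)
    return max(rates.values()) - min(rates.values())

# Invented example: a classifier flags group A at 60% and group B at 30%.
groups = ["A"] * 10 + ["B"] * 10
decisions = [1] * 6 + [0] * 4 + [1] * 3 + [0] * 7

print(positive_rate_by_group(groups, decisions))  # {'A': 0.6, 'B': 0.3}
print(parity_gap(groups, decisions))  # 0.3, so parity is violated

Note that the check says nothing about whether the underlying 60/30 split reflects reality; parity simply demands that the gap be closed.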

The absurdity of this should now be clear. Let us make it clearer still. Most homicides are committed by men. Should AI representation of homicide be manipulated “to ensure that it achieves equitable performance” with women? Would such manipulation preclude the resentments of men, and therefore be just? Following the authors’ logic, or rather their moral values (as I understand them), let’s suppose the answers are yes and yes. Well, now the problem is that AI can hardly represent homicide at all.
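
A hypothetical sketch makes the cost visible. Suppose a dataset of homicide records in which men account for 90 of 100 cases, and suppose parity is enforced by the crude expedient of down-sampling the over-represented group. The figures and the method are invented for illustration, not taken from Zou and Schiebinger, but any scheme that equalizes group proportions must discard or discount real cases in some such fashion.

import random

random.seed(0)

# Invented records of the form (group, committed_homicide); men account
# for 90 of the 100 cases, mirroring the kind of skew at issue.
records = [("man", True)] * 90 + [("woman", True)] * 10

def enforce_parity(records):
    # Down-sample the over-represented group until both groups
    # appear equally often in the data.
    men = [r for r in records if r[0] == "man"]
    women = [r for r in records if r[0] == "woman"]
    k = min(len(men), len(women))
    return random.sample(men, k) + random.sample(women, k)

balanced = enforce_parity(records)
print(len(records), "records before;", len(balanced), "after")
# Prints: 100 records before; 20 after

Four-fifths of the evidence disappears simply so that the group proportions come out “equal”; this is the sense in which, past a point, AI can hardly represent the phenomenon at all.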

To be sure, AI is essentially utilitarian. Its practical purposes require accurate representation of reality. How can this be done if science is corrupted by ideology, namely, resentment?

“Our societies have long endured inequalities. AI must not unintentionally sustain or even worsen them,” Zou and Schiebinger write. But having shallow minds, they assume that disparity in itself demonstrates moral evil (“inequality”). So, they are vexed that

Wikipedia…seems like a rich and diverse data source. But fewer than 18% of the site’s biographical entries are on women. Articles about women link to articles about men more often than vice versa, which makes men more visible to search engines. They also include more mentions of romantic partners and family.

Now, it may well be true that Wikipedia has not given sufficient space to noteworthy women. In any case, it is not evident that “statistical parity” would ensure “equality” here, for it is not evident that women deserve to be represented as extensively as men. Needless to say, in this context, the idea of deserving depends on achievement or some other form of noteworthiness. Nobody is noteworthy because of his or her gender.

Likewise, that “articles about women link to articles about men more often than vice versa” is not obviously wrong, because again, for all Zou and Schiebinger know, men may simply be much more noteworthy than women. I don’t claim that men are; the point is that only thorough inquiry can establish a reasonable position either way. Zou and Schiebinger’s facile assumption of “inequality” is worse than useless.
