Blog: Should ‘Fairness’ Be Irrelevant to Engineers?

Article By : Junko Yoshida

"Fairness" strikes many in the engineering community as a nebulous, uncomfortable topic. It seems to them irrelevant to what they do in technology development and hardware/software designs. But is it?

A couple of decades ago, I was working in Europe as a foreign correspondent for the U.S.-based publication EE Times. When I casually objected to a U.S. business practice I regarded as an invasion of privacy, a couple of industry analysts advised me: “In America, consumers are willing to trade their personal data if they can get something for free.” As far as privacy protection in the U.S. goes, they said, “That train left the station a long time ago.”

But has it?

Since the European Union rolled out its General Data Protection Regulation (GDPR) a year ago, the pressure to comply has extended to U.S.-based data platform companies whose reach is truly global. Moreover, even within the United States, California last year passed the California Consumer Privacy Act (CCPA), a law designed to enhance privacy rights and consumer protection for the state’s residents. This regulation, regarded by some as even stronger than GDPR, will take effect next January.

The public perception of privacy is rapidly evolving in the United States.

American consumers, perhaps belatedly, are waking up to the price they pay when they give up their privacy. By leaving their personal data in the hands of giant tech platform companies such as Facebook and Google, which may or may not do the ethical thing, Americans are finding out that there is little recourse when personal data gets hacked, mined, sold, or even used by shadowy groups to tip the balance of an election.

In the era of big data, privacy laws are fast becoming a primary element in any digital security conversation. For companies whose business is built around consumer data, consumer trust is evolving into a vital part of their business model.

In contrast, AI’s “fairness” stands at a point where privacy discussions stood 20 years ago. It hasn’t exactly risen to the consciousness of many people. At least, not yet.





Some of our readers dismissed the topic of AI fairness, discussed in an EE Times Special Project, as “social engineering.”

Many such comments imply that engineers are being asked to manipulate the technology (or game the algorithms or datasets), altering machine learning results for the sake of “political correctness,” a term just as loaded as “fairness.”

Nothing could be further from the truth.

Fired by AI or killed by AI

The real issue is “bias” insinuated into datasets, which can skew machine learning results. The optimization strategies used to train algorithms can then further amplify that bias.

Think about incomplete datasets that, for example, ignore people in wheelchairs or overlook construction workers in dayglo green jackets. AI algorithms trained on such datasets could mow down construction workers like tenpins and render wheelchairs more dangerous than unicycles.

AI’s inaccuracy, in this instance, can end up costing people’s lives. Its machine decisions are clearly “unfair” to the guys in dayglo green and to people in wheelchairs.
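To see how this plays out in code, consider a minimal sketch, purely illustrative and not drawn from any real perception system: a toy two-feature dataset in which one class of road users makes up only 2% of the samples. The class labels, sample counts and scikit-learn model are all assumptions for demonstration. An optimizer that minimizes total error can report excellent overall accuracy while almost never recognizing the rare class, which is exactly the amplification described above.

```python
# Illustrative sketch: how class imbalance in training data skews a model.
# The "pedestrian" vs. "wheelchair user" framing and all numbers here are
# hypothetical assumptions for demonstration, not real perception data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)

# 980 majority-class samples vs. 20 minority-class samples,
# drawn from heavily overlapping feature distributions.
X_major = rng.normal(loc=0.0, scale=1.0, size=(980, 2))  # e.g., pedestrians
X_minor = rng.normal(loc=1.0, scale=1.0, size=(20, 2))   # e.g., wheelchair users
X = np.vstack([X_major, X_minor])
y = np.array([0] * 980 + [1] * 20)  # 1 = the underrepresented class

clf = LogisticRegression().fit(X, y)
pred = clf.predict(X)

# Overall accuracy looks excellent, but recall on the rare class collapses:
# minimizing total error lets the model ignore the 2% it rarely sees.
print("overall accuracy:", (pred == y).mean())
print("minority-class recall:", recall_score(y, pred, pos_label=1))
```

Rebalancing the data, or reweighting the loss (scikit-learn’s class_weight="balanced" option, for instance), changes the outcome dramatically, which is why dataset composition is an engineering decision, not “social engineering.”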




The market is eager for AI because every business is looking for ways to automate parts of its operations. In pursuit of automation, we are beginning to cede to machines, wittingly or unwittingly, whole realms of decision making. These tasks include hiring, credit scoring, customer service and even driving.

To understand the unfairness of AI, perhaps you need to imagine yourself on the receiving end of bad news triggered by a machine’s decision.

If an AI algorithm makes redundant the job of an employee of a certain age, the unlucky worker is entitled to ask why he got canned. He might also wonder whether the AI system his employer depended on was unwittingly designed to be unfair to a certain age group.

Once you feel wronged by a machine, you’re likely to entertain a measure of outrage, perhaps even greater than if you were fired by a boss who you know is a jerk.

Can an algorithm be a jerk?

Black-box algorithms

This question points to a disconcerting reality: every AI algorithm is a black box. Having no clue what the algorithms are doing – whether they are deployed by a social network behemoth like Facebook or operating inside a Waymo robocar – makes everything about this brave new AI era opaque and uncertain.

In recent days, people have begun openly discussing whether it’s time to break up Facebook.

I’m not sure that a breakup would change any business practices among the giant tech platforms, but one thing is certain: it’s not the users but Facebook itself that makes arbitrary decisions about what pops up in our News Feeds. And Facebook operates under no regulatory oversight.

As Chris Hughes, who co-founded Facebook, wrote in The New York Times:

The most problematic aspect of Facebook’s power is Mark’s unilateral control over speech. There is no precedent for his ability to monitor, organize and even censor the conversations of two billion people.

Not everything rests on the shoulders of the engineering community. It’s on corporations, regulators, consumers and society as a whole.

But those who write algorithms and design systems should start thinking about consequences that seem, at first blush, remote from their tech labs – consequences like privacy protection and fairness. It’s both appropriate and overdue that we start talking about system development based on principles of “privacy protection by design” or “fairness by design.”

