OPINION | Superimposing agency on AI technologies will shift moral responsibility from their creators

In a society where humans seem to constantly be intentionally and unintentionally interacting with AI technologies, collaboration may not be quite possible, writes the author. Picture: iStock

Our fear of AI is misplaced precisely because we ascribe human capabilities to these technologies and wrongly assign them moral agency simply because they can appear and act in ways that invite anthropomorphic interpretation, writes Mpho Tshivhase.


Late last year, the Academy of Science of South Africa (Assaf) hosted a presidential roundtable themed: "Frankenstein or Gods? The Impact of the New Technologies on What It Means to Be Human". I was part of a panel that engaged with political scientist Professor Margaret Levi's keynote address.

Levi expressed concern over the future and governance of artificial intelligence (AI) technologies, and her overarching analysis involved a consideration of the underlying social problems. She pointed to the problem of power inequality as it related to corporate monopoly, technical expertise of the programmers versus the lack of said expertise by the end user, as well as the dependence and addiction that leads to human concession of control.

Furthermore, she identified the weaponisation of technology, the biased nature of algorithms, legal and illegal data/identity theft, as well as AI technology taking over human jobs among the fear-inspiring advances of technology.

Levi argued that it was impossible for AI to take human jobs because robots were "savant nerds" without emotional intelligence or self-consciousness. She maintained that the reservations humans displayed in response to AI technologies pointed to an anticipation of machines dominating humans, which was met in turn with a strong human desire to dominate AI.

Paradigm of collaboration

Both the fear of being dominated and the desire to dominate machines operate within the wrong paradigm of domination. She recommended instead viewing the relation between AI and humans through the paradigm of collaboration.

While I agree with Levi's conclusion that talk of domination is the wrong paradigm with which to capture the relationality between AI and humans, I am not convinced that the alternative paradigm is collaboration.

Apart from anthropomorphising AI, the language of collaboration is value-laden and invokes AI agency. It is a category mistake to think that AI possesses the kind of agency that enables collaboration with humans. Collaboration would require empathy, ethics, tacit knowledge and other forms of comprehensive knowledge, among other values that enable equitable participation among agents.

Levi rightly argued that AI technologies could not learn ethics, compassion and other forms of comprehensive knowledge.

They are instruments that cannot have agency in the way humans understand agency.


While they might have mechanical autonomy, AI cannot be said to be praiseworthy (gods) or blameworthy (Frankensteins). It is the designers and programme developers who can be gods or Frankensteins.

Herein I agree with Levi’s suggestion that AI research and design should not be left to technocrats alone – it should necessarily involve transdisciplinary collaborations that can create systems that respect and uphold ethical values that minimise the violation of human lives.

What we ought to be mindful of in our interaction with and use of AI technologies involves acute consideration of our ethical futures.

This will necessarily involve a culture change, which includes considering the ethical and societal consequences of the design and design research of AI technologies; planning for and mitigating expected harms, and being able to conscientiously choose not to create a harmful product; and revising the culture of AI design by rethinking the faces of consequences.

All aspects of our ethical futures, as espoused by Levi, can be captured using three values:

1) inclusivity;

2) planning; and

3) revision.

All three values seem to co-opt human beings and not the AI technologies.

Certainly, humans are entities whose humanity is dynamic in ways that are not replicable in AI technologies.

Our fears about being dominated by AI are partly due to the overextension of the instrumental value of AI. We tend to confuse their instrumental value with complex human capabilities that stem from the dynamic aspects of human nature.

Our fear of AI is misplaced precisely because we ascribe human capabilities to these technologies and wrongly assign them moral agency simply because they can appear and act in ways that invite anthropomorphic interpretation.

While their actions can have moral consequences, AI technologies remain amoral objects designed and created for specific tasks. They can be used and deployed for human benefit.

Ultimately, humans are the ones who need to collaborate to ensure ethically responsible use of AI, instead of giving it honorary human status.

Accountable 

Although we can grant AI technologies mechanical autonomy, those who design and create them should remain accountable for those technologies' actions, and should collaborate across disciplines to mitigate abusive uses of such technologies.

Notably, Levi's suggestion that humans and AI collaborate raises issues of moral relationality and the plausibility of co-authorship of the principles that govern interactions between humans and AI.


In a society where humans seem to constantly be intentionally and unintentionally interacting with AI technologies, collaboration may not be quite possible.

What is possible is morally responsible use/deployment of AI technologies.

In short, I think we should be careful not to superimpose agency on AI technologies, because doing so indefensibly shifts moral responsibility away from their creators.

- Dr Mpho Tshivhase is a senior lecturer in the Department of Philosophy at the University of Pretoria and the president of the Philosophical Society of Southern Africa.



Disclaimer: News24 encourages freedom of speech and the expression of diverse views. The views of columnists published on News24 are therefore their own and do not necessarily represent the views of News24.
