Post-ChatGPT, we must consider human purpose beyond work
On one side of the conflict, generative AI (e.g., ChatGPT) and other forms of artificial intelligence promise massive productivity gains and corporate profits; on the other lie confusion, mistrust and the likelihood that the masses will lose power and control.
Generative AI is a foundational technology: AI that can generate original audio, code, images, text, video, speech and more. It has made AI tangible to ordinary people, and its impact on jobs and daily life is now visible. Generative AI is making room for itself in the realm of creativity, which was historically a human monopoly. The technology draws on mass inputs (ingested data) and experience (individual interactions with users) to build a base of knowledge, then "learns" new information constantly in order to generate entirely novel content.
Some call ChatGPT-like tools the new frontier of a gold rush. According to research, "AI could take the jobs of as many as one billion people globally and make 375 million jobs obsolete over the next decade." On the other hand, it could generate over $15.7 trillion in value by 2030. From 2017 to 2022, venture capital investment in early-stage generative AI companies quadrupled, and growth expectations are significantly higher for the years to come.
The reach and impact of generative AI could be bigger than the internet, cell phones and cloud computing. Its potential is more comparable to the invention of hunting tools, the wheel and the alphabet. It can influence our society and behavior more significantly than the industrial revolution or the Renaissance.
But I question whether we are ready to meet the challenge.
Machines that can operate across most industries and functions, produce novel content and work faster and more knowledgeably than humans challenge people's power and social worth. An entity with the advantage of speed and capacity, with unlimited access to all human-generated information from day one, that can grow smarter faster than any individual, is powerful.
The existential question becomes: why am I here, and what is my purpose, if not to work 9 to 5 to earn a living? Will I need to serve the machine in the future, and how will I make a living?
Elon Musk predicts that AI-driven technologies could power the workforce in the future, saying, “There is a pretty good chance we end up with a universal basic income, or something like that, due to automation.” Does that mean in a few decades each company will only have one customer — the government? Won’t that challenge the fundamentals of capitalism or at a minimum require an entirely different social safety net?
We are entering an era of “abnormal” that requires different thinking both at an individual and a societal level.
Sam Altman, CEO of OpenAI, the company behind ChatGPT, reportedly said the "good case [for A.I.] is just so unbelievably good that you sound like a crazy person talking about it." He added: "I think the worst case is lights out for all of us."
Some fears are justified; others are rooted in our inability to see a future that is not simply an extension of the past.
AI machines learn from humans' past behaviors and decisions (data); they also inherit our biases. If machines can act and learn faster, they will potentially magnify our systemic biases: the biases that drive fake news and division, that shape how we judge and treat one another, that may drive wars, famine, racism, sexism and more. Unless we face our biases, we may be looking at a far more divisive future as machines act on our behalf.
But should we fear ourselves and our biases or the machine that is only replicating them?
Concerned about cheating, schools are pushing back on students' use of ChatGPT. New York City's Department of Education, along with officials in Seattle, Baltimore and Los Angeles, is also concerned about plagiarism. Is backing away from generative AI legitimate, or is it time for schools to teach students to apply their talents and use technology differently?
Some of my fellow professors at the University of Southern California conducted informal research and concluded that ChatGPT can answer undergraduate exam questions at an A level. The challenge: if machines can address the basic questions, shouldn't we rethink what we ask students to learn, and how? If we have cars to drive us around, should we still train horses for transportation?
No doubt, we need regulations that shield us through this large-scale worldwide change: regulations that guide us toward partnership with machines, not censorship of their capabilities and promise. We also need corporations to be alert to bias and aware of possible rogue behavior by machines.
But most importantly, we need a global mind shift that gives us all the courage to leave the past behind and embrace a future of flux.
It is time for massive change and growth, a time to think differently about our future and our relationship with machines. Rather than viewing that relationship through the lens of master and slave, we should approach it as a partnership. Guardrails are indeed needed, but machines will only replicate our biases, and students only cheat if we measure them by what they have memorized or by predefined procedures.
We should have the courage to let technology take over mundane processes and let machines coordinate routine future actions. Then, we will have the opportunity to conceive our next future. A future that relies on our collective mental evolution. A future that offers us the luxury to concentrate on innovation and creation. A future that we have not even imagined or are at all prepared for.
The bottom line: we are entering an era of "abnormal," an era that offers a fundamental change in our evolutionary path, from physical to mental. There will be unprecedented challenges to overcome, from the way we make a living and receive healthcare to our expectations of government, from the way we buy, sell, travel and learn to the way we spend our days, define intellectual property and seek legal protections.
Sid Mohasseb is an adjunct professor in Dynamic Data-Driven Strategy at the University of Southern California and is a former national strategic innovation leader for strategy at KPMG. He is the author of “The Caterpillar’s Edge” (2017) and “You are not Them” (2021).
Copyright 2023 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.