The future of artificial intelligence depends on human wisdom

AI is projected to grow global GDP by 14 percent, or $15.7 trillion, by 2030. Is all that money worth the threat it poses?

Published November 17, 2018 10:00AM (EST)

Arnold Schwarzenegger as T-800 "Model 101" in "The Terminator" (Metro-Goldwyn-Mayer)

Artificial intelligence, the capacity of a machine to imitate intelligent human behavior, now exists as a significant feature in our lives and is increasing rapidly in scale and scope. At its heart, AI poses the question of whether data can create significance and whether artificial intelligence can lead to wisdom and even consciousness. As a technological force, AI is inherently disruptive and creates an opportunity to rethink and restructure massive areas of human life, commerce, and culture.

The intelligence of AI is distinct from natural, human intelligence. Computers are alluring because they can operate at fantastic speeds, completing in a fraction of the time tasks that would take humans hours. AI can focus precisely and tirelessly on a complex task with multiple inputs, iterating repeatedly until the task is complete, in a fashion of which humans are simply incapable.

Current AI development has created a conflict among science, business and ethics over what the technology is capable of, where it could go, and even whether its development should continue.

Those who venerate AI and its possibilities champion its ability to address and conquer tasks and realms that cannot be handled efficiently or effectively by humans. Google CEO Sundar Pichai has emerged as AI’s John the Baptist: “AI is probably the most important thing humanity has ever worked on. I think of it as something more profound than electricity or fire.” Facebook CEO Mark Zuckerberg is also optimistic about AI, particularly regarding self-driving cars. "One of the top causes of death for people is car accidents still and if you can eliminate that with AI, that is going to be just a dramatic improvement." Where the physical and intellectual capacities of humans are inherently limited, AI has the potential to add to that reservoir of capacity to improve lives.

Then, of course, there is the money. Global GDP is projected to increase by up to 14 percent by 2030 as a result of AI, an estimated gain of $15.7 trillion, with the greatest growth to come in China (a 26 percent increase in GDP) and the U.S. (a 14 percent increase). Gartner predicts that by 2020, almost all new software will contain AI elements.

Huge amounts are already being invested by businesses that seek the efficiency gains and outsized accomplishments AI promises. Venture capital investment in AI startups grew 463 percent from 2012 to 2017. A McKinsey report noted that global demand for data scientists exceeded supply by over 50 percent in 2018 alone. Data scientists are so coveted that some Chinese companies are reportedly hiring senior machine learning researchers at salaries above $500,000. According to Mark Cuban, by 2017 Google had incorporated AI into its business model and generated an additional $9 billion as a result; Cuban has also posited that the world’s first trillionaire will emerge from the AI field.

Beneficially, AI provides a level of protection in the cybersecurity realm that is infeasible for human operators. Gmail has used machine learning algorithms to protect its email for the past 18 years, though they have required regular external updating, as the system cannot yet iterate on itself. Google also uses machine learning to weed out violent images, detect phishing and malware, and filter comments. This security and filtering operate at a scale and thoroughness that no human-based effort could equal.

Because these tasks are essentially beyond human capability, this automation does not come significantly at the expense of human employment. In 2016, IBM posited that the average company faced over 200,000 security events per day, hundreds of which required human action to resolve. IBM now employs three AI engines simultaneously to evaluate new cyberattacks against 600,000 previous incidents, allowing it to automate certain first responses that would otherwise require human intervention. With the emergence of cybercrime as the greatest new threat of the 21st century, AI may be emerging as humanity’s best solution to it.

Those suspicious of AI see no limits to its disruption. They believe that its increasing power will be difficult to contain, and that AI could eventually threaten humanity’s well-being. Perhaps the most famous such skeptic was the late renowned theoretical physicist Stephen Hawking, who stated that “The development of full artificial intelligence could spell the end of the human race….It would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.”

Some prominent business leaders have echoed his concerns. Tesla CEO Elon Musk noted: “Mark my words — A.I. is far more dangerous than nukes.” And Microsoft founder Bill Gates stated: “I am in the camp that is concerned about super intelligence.” Beyond fears of sentience in AI are concerns about placing incredible technology in the hands of those with malignant intentions. There is already a movement among scientists to preclude the creation of intelligent autonomous weapons, which would be optimally deadly and effective in the hands of terrorists and unscrupulous state actors.

AI should also be examined through its effects on different strata of society. While investors and business people see tremendous opportunities in creating efficiencies through AI, less educated workers often see a fundamental threat to their employment.

Take truck drivers. Seventy percent of all freight in the U.S. is moved by trucks, making truck drivers a critical element of the U.S. economy. Currently, there are roughly 1.29 million U.S. truck drivers, earning a median salary of $53,000, a strong wage for someone without a college degree. Furthermore, there is a consistent undersupply of drivers relative to freight demand, which keeps salaries stable through intense competition among freight companies.

As labor accounts for up to 45 percent of total road freight costs, business people around the world are eager to use AI and autonomous driving to reduce their reliance on truck drivers for freight transport. If successful, the effect would be catastrophic for non-college-educated workers. Fewer than four percent of U.S. truck drivers are under 25 years old, so this remains a primarily older group of workers with limited alternative employment opportunities. One study predicted that by 2030, autonomous driving, if quickly deployed, could eliminate up to 4.4 million of the 6.4 million total truck driving jobs in the U.S. and EU, a reduction of 69 percent.

This job-destruction effect would not be confined to the U.S. China’s trucking industry is currently worth $750 billion, carries 80 percent of all Chinese freight, and employs 30 million drivers. The social impact of aggressive autonomous driving adoption in China would be savage.

AI has also begun to threaten the jobs of journalists, jobs that increasingly require graduate degrees. Seventy years ago, entry into journalism came via a vocational apprenticeship, while today it apparently can come through a sophisticated algorithm. The Washington Post published 850 articles in 2017 that were written by its robot reporter Heliograf, which included stories ranging from covering congressional and gubernatorial races to local football games. Reuters now utilizes an AI tool called Lynx Insight in its newsrooms to analyze data, suggest story ideas, and even draft some copy. Yahoo’s news efforts include AI-written coverage of sports and TV shows, drafted by a program called Wordsmith, whose parent company Automated Insights “wrote” 1.5 billion articles in 2015.

Despite all of this, AI retains foundational limitations. Though AI can pursue and optimize a designated “utility function,” an objective it is programmed to maximize, it is incapable of pursuing a “values function” and therefore of understanding human values.

For example, if humans cannot define happiness accurately in an algorithm (and we have failed thus far), then it is computationally infeasible for a computer to reproduce it accurately. Humans instinctively pursue hierarchical decision-making, prioritizing some things over others for abstract, value-laden reasons. Computers are currently incapable of such hierarchical decision-making: they are directed exclusively by their programming, noting and measuring only what they are told. Thus, the many strengths of AI are counterbalanced by its (and occasionally our) own considerable limitations.
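The distinction can be sketched in code. The toy optimizer below (an illustrative example, not drawn from any production system) greedily maximizes whatever numeric utility function it is handed; the function name and parameters are hypothetical. The point is that the agent notes and measures only the single number we encode, and nothing else:

```python
# Illustrative sketch: a simple hill-climbing agent that maximizes
# whatever numeric "utility function" it is given. It has no notion
# of values, context, or priorities beyond that one number.

def hill_climb(utility, start, step=0.1, iterations=1000):
    """Greedily adjust x to increase utility(x); nothing else is measured."""
    x = start
    for _ in range(iterations):
        if utility(x + step) > utility(x):
            x += step
        elif utility(x - step) > utility(x):
            x -= step
    return x

# The agent optimizes exactly what we encode, e.g. "maximize -(x - 3)^2",
# and converges near x = 3 because that is all it was told to value.
best = hill_climb(lambda x: -(x - 3) ** 2, start=0.0)
print(round(best, 1))
```

If the utility function omits something humans care about, the agent is structurally blind to it; encoding a richer "values function" would require first being able to specify those values precisely, which is exactly the unsolved problem.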

The direction of AI’s development should primarily be determined by our own fount of valuable, fragile, and hard-won human wisdom. Whether AI evolves toward a more meaningful, human-like existence or toward the feared state of fully unconstrained autonomy and consciousness will depend on whether it can exceed the barriers we create for it. Ultimately, we still determine how AI should be controlled and regulated, now and for the future. We are capable of enlightened self-interest in regulatory policy, and to move forward properly with AI, we must exercise that capacity.

Unfortunately, enlightened self-interest is no guarantee of comprehensive compliance. Decades ago, it was globally recognized that CFCs made hairspray cheaper but damaged the ozone layer. Thus, the 1987 Montreal Protocol was proposed, and it is currently affirmed by all 197 countries on earth. This month it was discovered that rogue companies in China were flouting the protocol by using CFCs and threatening the ozone layer again. As hard as we try, even when every country on earth agrees, we cannot prevent some from transgressing and advancing a global threat for personal profit.

Over AI, the economic stakes are enormous, but so are the ethical and philosophical issues involved. Only extraordinary human science can scale the heights required to turn AI into a genuine threat. What stands between us and the Terminators may simply be a race between scientific accomplishment and human wisdom, and our constant quest to balance the two.


By Sam Natapoff

Dr. Sam Natapoff is the President of Empire Global Ventures LLC (EGV), where he helps companies scale internationally, and is a leading expert in international economics and business consulting. He has a Ph.D. in International Relations from George Washington University.
