Social responsibility has to be an organizing principle of big tech, not just cheap talk

To avoid future backlash, big tech must anticipate how all products will not only be used, but also abused and misused

Published October 20, 2018 10:00AM (EDT)

Facebook CEO Mark Zuckerberg (AP/Alex Brandon)

The distinguished British writer and scientist C.P. Snow said it best: “Technology is a queer thing; it brings you great gifts with one hand and stabs you in the back with the other.”

There is no question that technology has made our lives incomparably better; most prominently, we live longer and healthier lives. But at the same time, it has created enormous problems. Indeed, it has become one of the greatest threats facing humankind.

A mounting body of research shows that the more a young person uses Facebook, the lonelier, more isolated, and more depressed he or she is likely to be. Young people have become utterly obsessed with presenting an overly idealized portrait of themselves to the world, an image that cannot possibly survive a continuous barrage of other, even more idealized portraits. Contrary to the hype, Facebook has made their lives worse, not better.

And that’s just the tip of the iceberg. Facebook has allowed itself to be used for nefarious purposes. Some say its accessibility to hackers is woven into its underlying business model. Personal data is collected and sold to third parties for Facebook’s gain, without our awareness, let alone our explicit permission. The company has monetized every aspect of our being. Facebook has provided a platform for fake news and hate speech by white supremacists and other extremist groups, and it has allowed foreign governments to interfere with our elections. It’s nothing less than an extremely successful vehicle for the spread of dis- and misinformation.

To counter this assault, we need socially responsible tech companies. What, then, in broad outline, would they look like?

Socially responsible tech companies would take the dictum “Do No Harm” as a primary operating principle governing every aspect of their business. This means not only doing everything possible to prevent physical harm from their technologies, but also preventing harm to the mental health and well-being of individuals and society as a whole.

Social Impact Assessments would be an integral part of the company’s philosophy and, most importantly, its day-to-day operations. Each assessment would begin by identifying the “stakeholders” who stand to be helped, or harmed, by a technology. Before any technology was released, teams composed of engineers, social scientists, parents, teachers, and kids would work to anticipate how it could be abused and misused, and then build in appropriate safeguards.

Technologies whose harms clearly exceeded their benefits would be withdrawn voluntarily. In other words, companies would not wait for government regulation to “do the right thing.” Crisis management would be a fundamental part of the development and use of every technology, from its inception through its entire lifetime.

A major pharmaceutical company with which I have worked serves as a model. To combat the ever-present threat of product tampering, it formed an “Internal Assassin Team.” This theatrically named team uses its intimate knowledge of a developing drug to look for weaknesses. The team’s operating rationale is: since we know more about our products than anyone else, we are best positioned to ask how someone could infiltrate our operations and the retail stores where our products are sold, and thereby do the most damage. Based on the answers, the team then asks how it could combat those malicious attacks.

A socially responsible tech company undertakes this kind of critical exercise because it puts the health and well-being of consumers ahead of short-term profits. Safeguarding becomes a central aspect of its corporate culture. Profits follow precisely because it has put consumers first.

Despite objections that such measures are expensive and put a company at a competitive disadvantage, research shows that proactive companies are significantly more profitable, experience far fewer crises, and recover faster from those that do occur. In contrast, companies that are unprepared cycle from one crisis to another. Crisis management is not only the ethical thing to do; it is good for business.

In addition to business ethicists, a socially responsible tech company must employ another set of experts. Since all technologies affect or are used by kids, whether intended or not, experts in child development are a critical element of the company’s management.

For instance, one company discovered that one of its key products was being used by kids, even though they were not the intended audience. As a result, it hired an expert in child development to ensure that the product was safe for kids.

None of these steps is perfect. There are no ironclad guarantees that a company will avoid all crises and disasters. Nevertheless, research demonstrates that such actions are highly effective in lowering a company’s potential for crises.

We cannot continue to dump the latest great creations on the world and clean up their worst aftereffects afterward. Crisis management must be integral to the creation of all technologies and must be practiced across their entire lifespans. We have to do a far better job of anticipating how all technologies will not only be used, but also abused and misused, and we have to do everything in our power to limit those abuses and misuses. If we do not, the backlash against technology will spiral out of control, and all of us will be the worse for it.


This essay is based on the author’s recently published book, “Technology Run Amok: Crisis Management for the Digital Age” (Palgrave Macmillan, 2018).


By Ian I. Mitroff

Ian I. Mitroff is professor emeritus of management and organization at the University of Southern California Marshall School of Business.
