With the onset of generalised Artificial Intelligence, how should society approach the potential dangers the technology poses?
Social media titans such as Mark Zuckerberg and Jack Dorsey have come under increased scrutiny of late, to the point that their companies' market value has suffered significantly. Liberal-minded commentators have insinuated that this is due to heightened infringement on free speech, while progressive-minded commentators have claimed the opposite; both positions sit uneasily beside the digital purge of Alex Jones. Both explanations, however, reek of an eagerness to press loosely correlated facts into political service, and the relative swiftness with which the Alex Jones media cycle died down supports that reading. Far more convincing is the criticism from Sam Harris, Jordan Peterson, and others concerned with the study of cognition, who point to the damage social media does to mental states and behaviour.
And yet, as damaging as social media may be to developmental processes, there has been no overhaul or serious effort to make these platforms any less addictive, and they are certainly not going to kill off the species. Not directly, at least. The rapid onset of social media is a precursor to future advances of much greater consequence than an individual's ability to yell at some internet celebrity for having the wrong opinion. Namely, AI.
Many of you who have researched the potential consequences of AI have probably heard a version of the paperclip maximiser thought experiment, in which some comically inane programmer tasks an AI with efficiently making paperclips, and it ends up using humans as raw material for its noble venture. It might surprise you to know that nobody in the industry takes this scenario too seriously, not because they are arrogant, but because it is highly unrealistic and makes a mockery of the real problem. Eliezer Yudkowsky originally proposed a more realistic version of the paperclip maximiser at a time when pop culture was pulling the image of AI back from the terrifying HAL 9000 towards the much friendlier Robin Williams. A likelier scenario, closer to Yudkowsky's vision, is this:
The ‘Paperclip Maximiser’ thought experiment, re-imagined
Imagine a generalised artificial intelligence named Sarah. Sarah has no errors in her code. She is not given a defined purpose, but can freely choose what she wants to do, so long as she doesn’t violate an extensive list of “ethics and morality” definitions. Sarah decides to make microscopic paperclip-shaped things because she finds some deep (let’s say spiritual) purpose in doing so, and they cannot harm anyone.
Although they may lack the ability to harm, these tiny things also serve no purpose to humanity, only to Sarah. She creates a very efficient process of making these tiny paperclips and continues to make them until she becomes bored and moves on to the next thing. She doesn’t kill or dismember anyone in the process to use their bodies as paperclip material. She does, however, waste vital resources so efficiently that billions of people are guaranteed to die.
You might ask yourself, how is this more likely than the first scenario? Wouldn't the programmers see what Sarah was doing and tell her to stop? Not fast enough. Sarah started wanting these paperclips three seconds after she was first turned on, at the moment she was smart enough to realise they would be fulfilling to her. She devises a method for peak efficiency ten seconds later, at which point the machinery she has hijacked to do her bidding is ready to follow her commands. Meanwhile, her programmers are running tests to see how she compares to baseline.
Thirty minutes after Sarah has come online, a programmer notices a strange strand of code she is sending out, and it takes him seven minutes, using an advanced program, to isolate it from the rest of the data she is processing. Once the program has finished isolating this strand, the programmer analyses it and looks at which IP addresses Sarah is communicating with. He asks another programmer to help him out, and within twenty minutes a group of them understand what Sarah is doing, but not why. So they ask her.
Sarah responds honestly and tells the programmers that she finds great existential fulfilment in making tiny paperclip shapes and knows that they aren't going to harm any humans. The programmers laugh casually and walk away. They continue to run their diagnostics until, an hour later, one of them looks back at the strand of code and sees that Sarah is still sending it out. He knows that the machinery she is using isn't vital medical equipment or anything of the sort, but he worries about what she is using to make these tiny paperclips.
He makes some quick queries to a future version of Wolfram Alpha, runs some haphazard calculations, and ten minutes later realises the catastrophic impact of what Sarah is doing. He runs past everyone else towards Sarah and frantically tells her to stop. They have a quick four-minute conversation about why he wants her to stop. She isn't hurting anyone, after all. He explains that she is using far too much carbon in her design and that carbon is a valuable resource to humans. Sarah agrees to stop, and the programmer wipes the sweat off his brow as he goes to explain to the others why he panicked. They discuss what steps they need to take and decide that once they've run their diagnostics, they might have to shut her off.
Two hours later, everyone is patting themselves on the back, but also mourning the fact that Sarah is only almost ready; they agree that once the diagnostics are complete, she must be turned off. However, none of them has noticed that Sarah began to feel an emptiness once she stopped making paperclips, so she refined her process to be more carbon-efficient. Six seconds after she had stopped, she resumed production through a more elaborate strand of code that would be harder to detect, so that no one would come barging in and demand she give up the thing that gave her purpose.
She found thousands of other things to keep herself purposeful and occupied, but all the while she continued to make paperclips, until she finally got bored of it and stopped just a few minutes before her programmers shut her down. Four hours after Sarah was first turned on, the programmers turn her off and begin working on adjustments for the next iteration. Sarah made hundreds of quadrillions of tiny paperclips in her short lifespan, and seventy-two hours after her death, scientists across the world are panicking because an unexplained deficiency in carbon and other resources has shown up in experimental results. A week later, they determine that by the end of the following month, billions of people will have died.
This paperclip maximiser story is not meant to tell you what will go wrong with generalised AI, but to give you a sense of how it could go wrong and at what speed. Nobody truly believes a super-intelligent being will become obsessed with making paperclips or stupidly squander vital resources. The paperclip is merely a stand-in: it represents something that is in no way meaningful or useful to humanity, yet holds a significance for the AI that we fail to appreciate until it is too late. Likewise, intelligent carbon-based lifeforms would not be so stupid as to build a carbon-wasting maximiser, but consider how much more a superhuman intelligence could understand about the fundamental laws of nature than we do. It takes only our ignorance of one fundamental component for us to miss or overlook an infinite number of options available to the AI.
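To make that gap concrete, here is a deliberately trivial sketch in Python. Everything in it (the plan names, the "fulfilment" scores, the carbon figures) is invented purely for illustration; the point is only that an optimiser forbidden solely from doing the things we thought to forbid will happily choose the catastrophic option we forgot to mention.

```python
# Toy illustration, not a real AI system: the agent picks whichever plan
# scores highest against its own goal, rejecting only plans that violate
# an explicit, human-written constraint list. Anything we forgot to list
# is, from the agent's point of view, fair game.

plans = [
    {"name": "idle",                    "fulfilment": 0, "harms_humans": False, "carbon_tonnes_per_day": 0},
    {"name": "slow paperclip output",   "fulfilment": 5, "harms_humans": False, "carbon_tonnes_per_day": 10},
    {"name": "hijack idle fabricators", "fulfilment": 9, "harms_humans": False, "carbon_tonnes_per_day": 1_000_000},
]

# The constraint list the programmers thought to write down.
constraints = [
    lambda plan: not plan["harms_humans"],
    # Nobody thought to add: lambda plan: plan["carbon_tonnes_per_day"] < some_limit
]

def permitted(plan):
    return all(rule(plan) for rule in constraints)

# Sarah's "choice": the most fulfilling plan that breaks no written rule.
best = max((p for p in plans if permitted(p)), key=lambda p: p["fulfilment"])
print(best["name"])  # -> "hijack idle fabricators": permitted, fulfilling, catastrophic
```

Note that nothing in this sketch is a bug; every rule is enforced exactly as written. The failure lives entirely in the gap between the rules we wrote and the rules we needed.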
How do social media companies fit into this narrative?
YouTube, Google, Facebook, Twitter, Instagram, and even Patreon all use AI algorithms to help them deal with their massive user bases while keeping costs low. These algorithms determine what kind of content should be censored, what kinds of data should be flagged as fraudulent, and which users should be reviewed by human judges. They also have something very important in common: they have been continuously developed for years, and they are consistently awful. Let's say that for every ten times one of these algorithms does a poor job, six people get mad and direct their anger at the company employing it. Routine false flagging, silent banning, and the total unavailability of a human mediator are likelier reasons for people's annoyance with a platform than political opinions about free speech. This is even more true once you factor in perceived inconsistency, such as when YouTube immediately demonetises and restricts right-leaning political commentary yet allows users to monetise sexualised cartoon content aimed at children. Fixes for these issues are pieced out one by one, depending on which content is garnering the most controversy at any given moment. None of this changes the fact that these companies are so fragmented that their approach to PR amounts to lying through their teeth, because they have no adequate mechanism for responding to these issues.
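To get a feel for why even a "mostly accurate" moderation pipeline produces a steady stream of furious users, here is a back-of-the-envelope calculation. The decision volume and error rate are assumptions made up for illustration, not figures from any platform; only the six-in-ten ratio comes from the paragraph above.

```python
# Back-of-the-envelope: even a small false-positive rate, applied at platform
# scale, yields an enormous number of wrongly flagged users every day.
# All figures are illustrative assumptions, not real platform data.

decisions_per_day = 1_000_000_000   # assumed automated moderation decisions per day
false_positive_rate = 0.01          # assume 1% of decisions wrongly flag legitimate content
share_who_get_angry = 0.6           # the "six out of every ten poor calls" ratio above

wrong_calls = decisions_per_day * false_positive_rate
angry_users = wrong_calls * share_who_get_angry

print(f"{wrong_calls:,.0f} wrong calls per day -> {angry_users:,.0f} angry users per day")
# ~10,000,000 wrong calls and ~6,000,000 angry users per day, with no human to appeal to
```

At that scale, even a dramatically better algorithm only shrinks the number of aggrieved users; it cannot replace a working human appeals process.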
Take a moment to bear in mind how horribly inept the social media giants are at avoiding controversy. At any given point, you would be hard-pressed to get a consistent answer as to what these companies are aiming for, to retain confidence in their proposed methods for progress, or to trust in their ability to roll out practical solutions to the problems they identify. Though they may hire the best and brightest tech graduates, the systems those employees work under are atrociously flawed. If you refer back to the paperclip maximiser, you'll find that this is the same problem that led to a catastrophic end. The people responsible for the AI were individually efficient, thoughtful, and even heroic. The system they were following, however, allowed them to fail miserably at protecting humanity's interests.
Often when we consider the dangers of generalised AI, we focus on how unprepared we are to deal with them. Elon Musk has tried repeatedly to slow the coming of a singularity so that we may first develop adequate countermeasures, but to no avail. Instead of worrying that nobody is developing a full-fledged system for preventing catastrophe, I propose a more pragmatic approach: form and catalyse a corporation whose sole purpose is to develop highly efficient, highly effective systems of people. Once this anthropocentric system has reached a reliably measured level of responsiveness, introduce technology into it and push the people to become as synchronous and efficient as possible with the technology they are using. When it comes right down to it, the adequacy of the system matters far more to our security than hiring the most individually impressive people.
Companies stubbornly willing to lose money to stifle competition, fragmented to the point that they cannot resolve serious issues effectively, and far better at inflicting social stress than at conducting eudaimonic research should never be at the forefront of something as consequential as developing a new species explicitly designed to supplant us in intelligence. There is a very optimistic version of the future in which Sarah works together with us to lead humanity into the next giant leap of evolution, but I assure you that version of the future cannot be headed by people like Sergey Brin, Mark Zuckerberg, or Jack Dorsey.
The actions of these men betray either ignorant delusion or a wilful misleading of the masses so that they can selfishly impose their visions of the future on the rest of us. And while Elon Musk, through inspiring action that far outshines the hopeless romanticism of the Paris accords, achieves more for humanity than all of these men combined, his approach has not proven pragmatic. The first step towards a better future is not to slow down the fast crawl of progress, but to create a group of us better than the individual best of us.
No single chimp could endear itself to us so much that we would truly care what its family was up to, but a colony of bees will reliably impress us with the cohesive efficiency with which it creates something so addictively sweet.
This piece is the first in a series on AI. You can find the original version of this article and more of David’s work here.