When he was two years old, Stephen Thaler had a near-death experience. Thinking it was candy, he ate two dozen cold medicine tablets, and washed them down with kerosene that, in a parenting misstep too common in the 1950s, had been stored in a Coke bottle.
“I had the typical experience of falling through the tunnel and arriving at what looked like a blue star. Around it I saw little figures, little angels around a sphere,” Thaler, now 74, told Art in America from the suburban Missouri office of his AI company, Imagination Engines. “The most trusted people in my life—my dog and my grandmother—were there. And she said, ‘It’s not your time.’” When Thaler woke up in the hospital, his grandmother and his dog were waiting for him. That was perplexing. If they were alive, yet appeared in his vision, he reasoned, the powerful experience was no evidence of heaven but was fake or, more precisely, a visual spasm created by a brain at the apex of trauma.
That link between trauma and creativity (the vision Thaler’s brain produced) would prove instrumental for Thaler more than 50 years later, in 2012, when he induced trauma in an AI system he’d invented in the ’90s—Device for the Autonomous Bootstrapping of Unified Sentience, or DABUS—and it created an image that marks a stunning moment in the history of art: according to Thaler, it is among the first artworks to have been created by an autonomous artificial system. He has spent years trying to get that image copyrighted, listing DABUS as its author. The United States Copyright Office currently grants copyright only to human beings; Thaler’s invention, and his legal struggle, speak to one of the central debates currently raging in visual culture: can machines create art? “He is a mythical figure in the field of A.I. intellectual property,” Dr. Andres Guadamuz, a leading expert in emerging technologies and intellectual property, said of Thaler. “Nobody knows for sure what he’s about. Is he a crank? A revolutionary? An A.I. sent from the future?”
Many computer scientists have invented AI systems that create autonomously, but Thaler is one of the few who is comfortable using the word “sentient.” “Is DABUS an inventor? Or is he an artist?” he said. “I don’t know. I can’t tell you that. It’s more like a sentient, artificial being. But I even question the artificial part.”
Thaler makes for an unassuming Dr. Frankenstein. He dresses in sweater vests, like a frumpy professor, his silver hair teased into tall strands that curl delicately at his forehead. His lab in St. Louis takes up an otherwise empty floor of a squat three-story building in a shopping center that contains a Sam’s Club, a Walmart Supercenter, a plastic surgeon’s office, and a church. There’s wall-to-wall carpeting, a microwave, some small robots, a bowl full of Nature Valley granola bars and a large jug of instant coffee. A plush orange and black striped spider hangs over his desk.
He grew up not far from there, a precocious boy who obsessed over crystal-growing kits after receiving his first one in middle school. “I was fascinated with the idea of things self-organizing into such beautiful forms,” he said. He would go on to get a National Science Foundation grant in high school for a research project he devised. That led to a stint at a crystal-growing lab in Malibu, and eventually a master’s in chemistry at UCLA. He started his PhD at UCLA but found academic politics there distasteful, and followed his adviser to the University of Missouri-Columbia (MU).
“I wasn’t making a fundamental scientific discovery there,” he said, “and I always thought ‘I’m a pioneer, I want to be a pioneer, and do something truly outrageous.’”
MU happens to have the most powerful university research reactor in the United States, and Thaler used it to study how silicon reacts to radiation damage, thus potentially producing electronically valuable impurities within the material. One of his jobs was creating computer models that could simulate the knock-on damage of atoms.
“I started playing games. I was building lattice models in which I could actually freeze in smiley faces, and when I would damage it, it didn’t create arbitrary patterns but slight variations on them,” said Thaler. The experiments cemented something that he had suspected for a long time: “An idea is just a corrupted memory.”
In the 1980s Thaler was experimenting with neural networks, technology that mimics the architecture of the brain, and using damage to provoke what he calls “novel experience.” He would stress out the synthetic brain until the system started making erroneous associations between different concepts. He created the DABUS system in his garage in 1992. By introducing noise, a mathematical representation of randomness that human senses register as static, he found he could simulate perturbation. As noise was injected into the system, it began to make new associations between its different training data, thus generating new ideas. Simultaneously, DABUS could recognize which of these new associations was useful and which wasn’t, until it got overwhelmed by the influx of noise, and effectively stalled.
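The principle Thaler describes can be sketched in a few lines. What follows is a toy illustration of the general idea, not his patented implementation: a stand-in “trained” network recalls a stored pattern, and Gaussian noise injected into its connections produces variations on that pattern that grow with the noise level until the output is swamped, the point at which the system, in Thaler’s terms, effectively stalls.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained network: a fixed weight matrix that maps a
# "memory" vector to an output pattern. (Illustrative values only.)
weights = rng.normal(size=(8, 8))
memory = rng.normal(size=8)

def recall(w, x):
    """One feed-forward pass with a tanh nonlinearity."""
    return np.tanh(w @ x)

baseline = recall(weights, memory)

# Inject noise into the connections: small perturbations yield slight
# variations on the stored pattern; large ones drown out the signal.
for sigma in (0.0, 0.1, 1.0, 10.0):
    noisy = weights + rng.normal(scale=sigma, size=weights.shape)
    drift = np.linalg.norm(recall(noisy, memory) - baseline)
    print(f"noise sigma={sigma:>5}: output drift = {drift:.3f}")
```

A real system like DABUS would also need the second half Thaler describes: a critic that judges which noise-induced variations are useful associations and which are junk.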
At the time, artificial intelligence was more science fiction than reality—it would be almost a decade before Steven Spielberg released A.I., his 2001 movie based on a 1969 short story about a robot child—and Thaler’s attempts to find investors for DABUS fell flat. “They thought it was crazy,” he said. “They said, ‘That’s impossible, machines cannot invent anything.’” In fact, Thaler and DABUS were ahead of their time. His implementation of noise is the same principle that powers the generative AI systems Midjourney and OpenAI’s DALL·E that have taken over the tech world in the past few years. The only difference is scale: DABUS was trained to create from the 4,000 images Thaler had on his camera roll. By comparison, Midjourney was trained on 5.8 billion images scraped from the internet, and it receives constant input from its tens of millions of users. “I suffer from insomnia late at night over this!” Thaler told me over email. “If you actually have the patience to read through my patents, from the ’90s, and early 2000s, [big AI companies are] simply adding more money and resources to what I’ve already done. Those are my inventions.”
Despite the lack of investor interest, Thaler continued to tinker with DABUS. In 2012 he introduced a different kind of noise: a simulation of the near-death experience he’d had as a child. He intentionally severed a portion of DABUS’s neural nodes from the rest of the network, and found that it caused a reaction similar to a human’s end-of-life light show, something Thaler calls “life review and then the manufacturing of novel experiences.” Afterward, DABUS began reviewing its data or, as Thaler puts it, its “memories,” and, from them, produced an image showing train tracks threading through brick archways that it called A Recent Entrance to Paradise.
“It’s proto consciousness, you have a continual progression or parade of ideas coming off it as a result of this noise inside,” Thaler said. “This is how our brains work, we think mundane things exist in some common state, and then the tiger is chasing you off the path and you climb a tree or do something original you haven’t done before. That’s the cusp that we live on.”
Thaler might never have sought legal acknowledgment of DABUS as a creator had fate not introduced him to a man named Ryan Abbott. A physician, lawyer, and PhD, Abbott was working as an intellectual property lawyer for a biotech firm when a vendor approached the firm with a new service: machine-learning software that could scan a giant antibody library and determine which ones should be used for a new drug.
“I thought, well, when a person does that, they get a patent,” Abbott, who is now a professor at the University of Surrey School of Law in England, told Art in America. “But what about when a machine does that?”
He began researching machine learning and came across Thaler. In Thaler and DABUS, Abbott saw a means of testing out patents and copyrights invented by autonomous machines. The two men began speaking with judges and other legal experts about the possibility of obtaining patents and copyright for DABUS’s creations. At the time, a decade before generative AI became daily news fodder, they were met with utter disbelief that DABUS was capable of such production. But even now, Thaler and Abbott find consistent obstruction to their goal of getting DABUS, and thus Thaler, recognized for its creative output.
“We submitted [A Recent Entrance] as an AI generated work on the basis that Dr. Thaler had not executed the traditional elements of creativity,” Abbott said, “with the aim that AI generated work should be protected and someone should be able to accurately disclose how a work was made.”
Abbott and Thaler’s push for copyright brings up a very basic question for artists today: how do we locate agency and creativity when we make things with machines? When is it our doing, and when is it “theirs”? This question follows the arc of history as humans design increasingly complex tools that work independently of us, even if we designed them and set them into motion. Debates have raged in public forums and in lawsuits over whether a model like Midjourney can produce genuinely novel images, or whether it is merely stitching together disparate pixels from its training data into a kind of synthetic quasi-originality. But for those who work in machine learning, this process isn’t all that different from how humans work.
“Everything is always going to be a product of how its system is trained,” Phillip Isola, an associate professor at MIT with a long history in developing AI-enabled artistic tools, told Art in America, referring to claims that because an AI has been trained on preexisting images, it isn’t displaying original creativity. “But humans are too.”
Two or three years ago, Isola said, he would have agreed that describing generative AI as stitching together training data in a “fairly superficial way would have been a fairly accurate characterization.” But AI models have grown more sophisticated through reinforcement learning from human feedback, or RLHF. With RLHF, humans rate not just accuracy—say, whether a human hand in an image has five fingers—but how much they like the image the AI model created. This process, Isola argued, has shifted generative AI from predictive creation—or fancy autocomplete—into something different. “Now, I think these [AI] are extrapolating in ways that are similar to the ways humans might be inspired by several different artistic styles, and precompose those into new creations,” Isola said. “Before, they were just imitating us. But now, they try to not imitate what humans would do, but try to learn what humans would want.”
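The loop Isola describes can be caricatured in a few lines. The sketch below is a toy model, not any lab’s actual RLHF pipeline, and every number in it is invented for illustration: a simulated rater repeatedly picks the preferred of two candidate outputs, and the “policy” drifts away from imitating its training data toward what raters reward, without ever being shown the target directly.

```python
import random

random.seed(1)

# Hypothetical setup: candidate outputs are plain numbers, and a
# simulated "human rater" prefers values near a hidden taste target.
TASTE_TARGET = 0.8  # invented stand-in for what humans want

def human_prefers(a, b):
    """The rater picks whichever of two candidates they like more."""
    return a if abs(a - TASTE_TARGET) < abs(b - TASTE_TARGET) else b

# The "policy" starts by imitating its training data (centred at 0.2),
# then shifts toward whatever the rater keeps choosing.
policy_mean = 0.2
for step in range(200):
    a = policy_mean + random.gauss(0, 0.1)
    b = policy_mean + random.gauss(0, 0.1)
    winner = human_prefers(a, b)
    policy_mean += 0.1 * (winner - policy_mean)  # nudge toward preference

print(f"learned preference centre: {policy_mean:.2f}")
```

The policy never sees the target, only which of its own outputs won each comparison; that is the sense in which, per Isola, such systems “learn what humans would want” rather than imitate what humans would do.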
This turn in artificial intelligence is something that German artist Mario Klingemann has been playing with in his artistic practice.
In late 2021, Klingemann launched Botto, an AI image generator that produces 4,000 images weekly. At the end of each week, Botto presents 350 of these creations to a community of more than 5,000 members who have purchased stakes in Botto. The community then votes on which images to mint and auction on the NFT sales platform SuperRare. Each successive voting period provides the AI additional feedback about which images are successful. Sales proceeds are split between Klingemann and the community, with a share going toward Botto’s upkeep. Such a project makes it blatantly obvious that, yes, one can make interesting, engaging art with AI; it just takes a particularly interesting artist to make that happen. “The purpose of contemporary art is to constantly push the boundaries, make people question, is this still art? Why is this art? We got rid of everything in art over the past 100 years, all that at one point defined art,” Klingemann told Art in America. “Maybe we’ve come to the point where the only thing we can do is remove the artist, the human artist, and still call something art.”
Despite his best efforts, Klingemann hasn’t been able to separate himself from Botto. Even though Botto has its own style that diverges from Klingemann’s tastes, has exhibited and sold work, and has received press coverage and critical analysis, Klingemann knows that Botto will never be considered an artist independent of him. Botto is missing something critical: a self. Klingemann will continue to get credit for Botto, and Thaler will continue to meet skepticism that DABUS can produce work autonomously.
There is a reason AI models are called image generators: Generating and creating are separated, linguistically, by will. Creation implies action, causing, making, whereas generating has its etymological roots in the Latin verb generare, to give birth or propagate. Nature is the result of this supposedly automatic generation, while creation assumes a degree of consciousness. It seems likely that we will deem AI intelligent, creative, or sentient only when it betrays the barest whiff of agency, because intelligence without self-interest is nonhuman intelligence indeed. A similar principle has undergirded art for millennia. Art is what people make.
In his 2022 book, Art in the After-Culture, art critic Ben Davis writes, “‘Art’ stands in symbolically for the parts of cognition that do not seem machine-like.” Accordingly, the loose definition of art has changed to keep pace with the advancement of machines. Craft is not really art because machines can make tables and sweaters. The advent of cameras, which made rendering a realistic image as simple as pressing a shutter button, initiated Impressionism, Cubism, and the long arc of conceptual art. In contemporary art, the institutions, galleries, and other gatekeepers have increasingly clustered around the figure of the artist and the individual life story, and run away from the material object, which can always be replicated anyway. We are left clutching that indefinable spark as some final differentiator between humans and machines.
For Thaler, that differentiator is already meaningless. “What’s an artist? A bunch of associations, a guy with a beret on his head and a crazy mustache,” he said, arguing, in essence, that the designation comes from social validation, from playing the part. “Thanks to this AI, I do everything from medicine to materials discovery to art and music. I do everything as a result of it and that’s a dream come true.”
If AI images take over the visual field, copyright itself may become obsolete. At the crypto-conference FWB Fest last year, graphic designer David Rudnick proposed that sometime in the near future, most images online will be AI-generated. A 2022 research paper by Epoch—a research initiative on AI development—estimated that between 8 and 23 trillion images are currently on the internet, with an 8 percent yearly growth rate. Meanwhile, current AI models generate 10 million images per day, a figure growing at 50 percent yearly, according to researchers. If those numbers hold, we will see what art writer Ruby Justice Thelot recently called a “pictorial flippening” by 2045. The “flippening,” in Thelot’s terms, is the point at which the visual data from which image generators learn shifts from being mostly human-made to mostly AI-made.
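The projection can be sanity-checked with back-of-the-envelope arithmetic. The sketch below takes the midpoint of Epoch’s 8-to-23-trillion range and the growth rates quoted above; depending on which end of the range one starts from, the crossover lands in the mid-2040s, in line with Thelot’s 2045 estimate.

```python
# Rough check of the "pictorial flippening" date, using the figures
# quoted in the article: about 15.5 trillion human-made images online
# in 2022 (midpoint of 8-23 trillion) growing 8 percent a year, versus
# AI models generating 10 million images a day, growing 50 percent a year.
human_stock = 15.5e12          # human-made images, 2022 midpoint estimate
ai_annual = 10e6 * 365         # AI images generated in 2022
ai_stock = 0.0

year = 2022
while ai_stock < human_stock:
    ai_stock += ai_annual
    human_stock *= 1.08        # human-made corpus keeps growing
    ai_annual *= 1.50          # AI output grows much faster
    year += 1

print(f"AI-made images overtake human-made around {year}")
```

Because the AI side compounds at 50 percent a year against 8 percent, the exact starting stock matters surprisingly little; shifting it by a factor of two moves the crossover by only a couple of years.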
“The artificial will no longer try to mimic the human-made but this new amalgam of network-made and human-made,” Thelot wrote for Outland Art in July. “The blurring will be complete, and the modern world will be precipitated into a permanent state of hyperreality, where images will no longer be tethered to a human maker and images will be made for and by machines.”
Over the years, DABUS has been many things to Thaler: creator of spacecraft hulls, toothbrushes, and Christmas carols. It has invented robots and been trained as a stock market predictor. Whether or not it will ever be legally credited for its artwork is for the future to decide. In June 2022, Abbott sued US Copyright Office director Shira Perlmutter on behalf of Thaler after the office not only refused to grant DABUS authorship but also didn’t allow Thaler to claim copyright of the image as DABUS’s creator. The case eventually went before US District Judge Beryl A. Howell in Washington, D.C., who ruled against Thaler and Abbott this past August, writing in her decision that Abbott had “put the cart in front of the horse” by arguing that Thaler is entitled to a copyright that doesn’t exist in the eyes of the law. Absent human involvement, there is no copyright protection, according to Howell, because only humans need to be incentivized to create. The decision leaves DABUS in the grayest of gray areas: If, as Thaler claims, he himself had nothing to do with the creation of the image, and if DABUS lacks personhood—and thus a claim to copyright—we are left with a vacuum. No one made this work.
The article “Stephen Thaler’s Quest to Get His ‘Autonomous’ AI Legally Recognized Could Upend Copyright Law Forever” by Lmiller was published on 08/01/2024 by www.artnews.com