7 min read
The Artist In The Machine

by William Thomas

In October 2018, the first piece of computer-generated artwork ever sold by a major auction house – a soft-focus portrait created by a so-called Artificial Intelligence – fetched a winning bid of $432,500 from an anonymous buyer phoning in to Christie's in New York. The curio had been expected to fetch $7,000 to $10,000. “Edmond de Belamy, from La Famille de Belamy” was offered by the Paris-based collective Obvious Art, whose members Pierre Fautrel, Hugo Caselles-Dupré and Gauthier Vernier had been experimenting with art and machine for less than a year, CNN noted. The piece was signed by the artist.

     "Its name is derived the core GAN algorithm used to depict “a blurry and unfinished image of a man,” Reuters unenthusiastically reported. Generative Adversarial Networks have long been employed by visual artists. "We fed the system with a data set of 15,000 portraits painted between the 14th century to the 20th," explained collective member Hugo Caselles-Dupre, who pointed to other portraits in the ever-expanding portfolio.

     “Can artificial intelligence produce a masterpiece?” asked the New York Times. Mario Klingemann, an artist recognized for his work with machine learning, likened “Edmond de Belamy” to “a connect-the-dots children's painting." Is the Belamy bombshell even lowercase “art”? The remaining 10 machine-created paintings of the Belamy clan displayed here resemble Renaissance figures whose misshapen faces have survived a car wreck.

     Yet, as an artist friend points out, the distorted visages haunting Francis Bacon’s paintings (at left) sell “for millions”. And for those who prefer bad acid trips, Ralph Steadman’s brilliant political portraits are (thankfully) in a mind-warping class apart. 


Just as human brains compare every new encounter with a lifetime store of memories, relentlessly seeking patterns, “to the computer, it's all just pattern-matching,” asserts George Johnson. The basic formula for robot artists proceeding step-by-algorithmic-step is deceptively simple: If this… then this. If not this… then this.

     The emergent power of this basic instruction comes through each compounding iteration. Repeat enough times with enough introduced variations and each new work evolves the machine’s ability to combine elements of music, prose, sculptural or painterly composition in new ways. And novelty certainly fits a rough-and-ready description of “art”. 
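That compounding of a simple branching rule can be caricatured in a few lines of toy Python. Everything here (the motif list, the branching rule, the number of passes) is invented purely for illustration:

```python
import random

# A deliberately tiny "artist": one if/then rule applied over and over,
# with a random variation introduced at each pass.
MOTIFS = ["circle", "square", "spiral", "wave"]

def vary(work, rng):
    """Apply the rule once: branch if the last motif repeats, else extend."""
    if len(work) >= 2 and work[-1] == work[-2]:
        return work + [rng.choice(MOTIFS)]           # if this... then this
    return work + [work[-1], rng.choice(MOTIFS)]     # if not this... then this

rng = random.Random(0)
work = ["circle"]
for _ in range(6):          # each iteration compounds the previous result
    work = vary(work, rng)
print(work)
```

Run the loop sixty times instead of six and the piece no single rule "wrote" emerges from the accumulation of variations, which is the whole point of the paragraph above.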


Creativity “happens when a new analogy is invented. When your mind connects two things that aren't usually connected,” posits Internet pioneer David Gelernter.   

     “Constraints and unpredictability, familiarity and surprise, are somehow combined in original thinking,” chimes in Margaret Boden, OBE, a cognitive science professor in the Department of Informatics at the University of Sussex whose work embraces artificial intelligence, philosophy, cognitive science and computer science. Artistic creations, Boden believes, “concern novel ideas which not only did not happen before, but which could not have happened before.” When any artist – human or otherwise – comes up with a new analogy, a fresh comparison, an unexpected linking, we see the world in a new light.

     “Generative Adversarial Networks (GANs) analyze tens of thousands of images, learn from their features, and are trained with the aim to create new images that are indistinguishable from the original data source,” explains Obvious Art’s Caselles-Dupré. By deleting any image lacking enough features in common with the works it’s viewing, robot art begins as mere mimicry. And then…

     “They also reproduce the notion of novelty,” Caselles-Dupré continues. “Even with the same inputs, the algorithm will each time render a different result.”
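That built-in novelty comes from the random latent vector a trained GAN generator samples on every call: the learned weights stay fixed, but each draw of noise maps to a different picture. A toy stand-in (no real neural network here; the frozen "weights" are numbers invented for the example):

```python
import random

# Pretend these are the generator's trained, frozen parameters.
WEIGHTS = [[0.4, -0.2], [0.1, 0.9], [-0.5, 0.3]]

def generate(rng):
    # Each call draws a fresh latent vector z, so the output differs
    # even though the generator's weights never change.
    z = [rng.gauss(0, 1) for _ in range(2)]
    return [sum(w * x for w, x in zip(row, z)) for row in WEIGHTS]

rng = random.Random(42)
a = generate(rng)
b = generate(rng)
print(a != b)   # identical "trained" weights, two different "artworks"
```

Same inputs in the sense of same training and same parameters; different results because the dice are rolled anew each time.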


A three-year-old hominid addressing a newly painted living room with a box of crayons may also render unique drawings. But that redecorated wall is unlikely to be displayed in the Louvre. Creativity is “a combination of novelty and significance,” contributes Douglas Hofstadter, professor of cognitive science and computer science at Indiana University. “What matters is to be novel and have some depth.”


Perhaps no other medium is as immediately accessible in its depth as music – especially live symphonies. Consciously or, more often, subconsciously, all artistic expression builds on creations that have come before.

     “Whenever Mozart heard something,” says renowned composer David Cope, "he was able to digest it and store it in his database. He could recombine it with other things so that the output would be hardly recognizable." After writing increasingly elaborate software to help him compose music for more than 30 years, Cope stands this human-machine debate on its integrated circuits, observing that “humans compose like computers,” writes Chris Wilson.

     "We don't start with a blank slate," Cope continues in his own words. "In fact, what we do in our brains is take all the music we've heard in our life, segregate out what we don't like, and try to replicate [the music we like] while making it our own." Just like any aspiring AI artiste.   

     Examining numerous musical notations, Cope’s first program, EMMY (short for Experiments in Musical Intelligence), vacuumed thousands of passages from chorales… spent a moment looking for patterns… “then altered and recombined bits and pieces into new works that fit the patterns it had found,” Chris Wilson reports.

     “Chorales were a natural place for a program to start because, like canons and fugues, they operate according to a set of musical rules governing harmony and structure,” Wilson adds. EMMY went on to analyze entire compositions, looking for repetitive themes. Working collaboratively, Cope listened to his computer’s musical suggestions, incorporating passages that resonated with him into his finished piece. Thanks to EMMY, a project he had nearly abandoned after a seven-year brawl with composer’s block took only two weeks to finish. Those attending the debut of Cope’s long-anticipated opera, "Cradle Falling," never heard of EMMY. But they did know what moved them to tears. The ecstatic Richmond Times-Dispatch called a particular passage "a supreme dramatic moment, punctuated by the captivating beat of drums." Take a bow, EMMY.
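Cope's actual software is far more sophisticated, but the find-patterns-then-recombine step can be caricatured with a first-order Markov chain: learn which note tends to follow which across the source passages, then chain those learned transitions into a "new" melody. The note sequences below are invented for illustration:

```python
import random
from collections import defaultdict

# Toy "chorale" passages (invented), as sequences of note names.
passages = [
    ["C", "E", "G", "E", "C"],
    ["G", "A", "G", "F", "E"],
    ["E", "F", "G", "C", "E"],
]

# Pattern-finding: record which note follows which in the sources.
follows = defaultdict(list)
for passage in passages:
    for a, b in zip(passage, passage[1:]):
        follows[a].append(b)

# Recombination: walk the learned transitions into a fresh passage.
rng = random.Random(1)
note, melody = "C", ["C"]
for _ in range(7):
    note = rng.choice(follows[note])
    melody.append(note)
print(melody)
```

Every adjacent pair in the output already occurs somewhere in the sources, yet the melody as a whole appears in none of them: mimicry at the small scale, novelty at the large.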


“All these projects have in common the replacement of part of the creativity process. Each one of them is different in the level of human intervention it involves,” declares Obvious Art. “Once the whole process will have been automated, we will have created a machine that is capable of being creative, in the same way a human is.” Really? Can pictures kludged from formulaic rules evoke the same “presence” imbued by human-created art, the presence to which other humans respond?

     Great art is so much more than painting or composing music or poetry by numbers – even very clever numbers. To produce “art” in big air-quotes, an algorithm must proceed from and produce inspiration. As AI Art pioneer Philip Galanter reminds us, “Art is more than the creation of objects. It is also a progression of ideas with a history and a correspondence to the larger culture.” 

     “Where is your ecstasy?” I ask faux-artists like min G max D. “Where is your outrage?” Even if fed all available data on Richard “Millstone” Nixon, no machine could produce pictures embodying the truths revealed in Steadman’s anguished distortions. Because 1’s and 0’s are incapable of feeling that British artist’s fear and loathing of the Amerika he landed in. With the Doctor of Gonzo, Hunter S. Thompson as guide.                 



Now imagine an Advanced Intelligent Machine tasked with creating a picture that is beautiful, Obvious Art urges. “Beauty is a subjective value, and there is no right or wrong answer to this. But there is a statistically optimal one.

     “An option would be to put labels (meta-data) on the pictures that serve as input (food). If you can tell me which images have been enjoyed the most, I can accentuate my training on these pictures, and create an image that is closer to those.” Looks like we’re heading back toward mere mimicry. But by “labelling input pictures with emotions,” insists a member of this French AI art collective, I can “create something that reflects my personality.” Better make that “co-create”, guys. Because what is being reflected is your bias of choice and exclusion – not a truly emotive “personality”. 
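The "accentuate my training" step Caselles-Dupré describes amounts to biased sampling: pictures with higher enjoyment labels are simply drawn more often. A minimal sketch, with hypothetical picture names and scores invented for the example:

```python
import random

# Hypothetical input pictures with "enjoyed" scores from viewer meta-data.
pictures = {"portrait_a": 9, "portrait_b": 1, "landscape_c": 5}

def sample_training_batch(rng, n):
    # Pictures enjoyed the most are seen proportionally more often,
    # accentuating training on them -- and on the labeller's taste.
    names = list(pictures)
    weights = [pictures[p] for p in names]
    return rng.choices(names, weights=weights, k=n)

rng = random.Random(7)
batch = sample_training_batch(rng, 1000)
print(batch.count("portrait_a"), batch.count("portrait_b"))
```

Note what the weights encode: not the machine's sensibility but the curator's. Whatever "personality" emerges is inherited from whoever assigned the scores.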


In their online Handbook on Evolutionary Art and Music, Penousal Machado and Juan Romero present Ralph’s Bell Curve of aesthetics as another “fitness test” for art. Ralph’s model is “based on an empirical evaluation of many fine art works, in which paintings have been found to exhibit a bell curve distribution of color gradient.” This test, Machado and Romero claim, “is very useful for automatically evolving… images with painterly, balanced and harmonious characteristics.”   
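Ralph's actual model is more involved, but the flavor of such a fitness test can be sketched: measure an image's brightness gradients, then score how closely their distribution resembles a bell curve. In this toy version the sample "images," the one-sigma check and the scoring formula are all simplifications invented for illustration:

```python
import math

def gradient_fitness(image):
    """Crude stand-in for a bell-curve fitness test: score how closely the
    image's horizontal brightness gradients follow a normal distribution.
    (Ralph's real model works on color gradients of fine-art paintings.)"""
    grads = [abs(row[i + 1] - row[i]) for row in image for i in range(len(row) - 1)]
    mean = sum(grads) / len(grads)
    std = math.sqrt(sum((g - mean) ** 2 for g in grads) / len(grads)) or 1.0
    # A true bell curve puts roughly 68% of values within one standard
    # deviation of the mean; score the deviation from that proportion.
    within_one_sigma = sum(abs((g - mean) / std) <= 1 for g in grads) / len(grads)
    return 1 - abs(within_one_sigma - 0.68)

smooth = [[0, 1, 2, 3, 4, 5, 6, 7]] * 4   # invented toy "images"
noisy = [[0, 9, 1, 8, 0, 9, 2, 7]] * 4
print(gradient_fitness(smooth), gradient_fitness(noisy))
```

An evolutionary art system would breed the images scoring highest on such a test, which is exactly where the objection in the next paragraph bites.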

     Spontaneity? Serendipity? Arresting discord? Not within programmed "statistical parameters". What if color is not a primary aspect of beauty? 

     Pointing to “AARON’s Code: Meta-Art, Artificial Intelligence and the Work of Harold Cohen,” the prophet of a still imminent AI Singularity notes that brightness is a far more important element of color than hue. The effectiveness of a particular color is “primarily based on its ‘brightness’ relationship to other colors,” Ray Kurzweil reminds us. To test this beauty-without-color assertion, check out black-and-white prints by Edward Weston, Ansel Adams, Dorothea Lange or Walker Evans.


“The fact is that art is not, and never has been, concerned primarily with the making of beautiful or interesting patterns. The real power, the real magic, which remains still in the hands of the elite, rests not in the making of images, but in the conjuring of meaning,” observed the late Harold Cohen, creator of the world’s first AI artist. “Art is a meaning generator, not a meaning communicator.” 

     “We don’t value art because it can communicate particular meanings,” interprets Emma Callen. “Rather, we delight in the artist’s ability to present us with something inspiring; we generate our own meanings from what an artist awakens in ourselves.”


The ability of any art form to invoke an emotional response in the viewer/participant depends not so much on the latest gadget as on the artist’s ability to channel her or his emotional experience into each moment of creation. While faux-emotions can be programmed by rote into a machine, no robot “artist” (or plastic sexbot) feels what it professes.

     Is feeling and expressing emotion the uncrossable gulf forever separating human and machine-made “art”? Does this distinction matter to a learning machine? To insist that a computer cannot be creative until it can simulate all the nuances of human emotion misses two salient points:

1. Simulating a human emotion is not a human emotion. 

2. Machines already display their own temperament. Think not? How does your computer respond when you repeatedly curse and pound the keyboard? How many times has your wounded car made it to the car-hospital after resonating to your most endearing entreaties and being patted on the dashboard?

Photo Credits:

“Le comte de Belamy”, an artwork created by a machine.

"Creative paintings" by  an AI named CAN.

Le Marais Edition 1 of 1  by an AI using 15,000 artworks representing landscapes.