1 big thing: Art-ificial intelligence (by Axios)
- cgartadvisory
- Nov 6, 2022
- 2 min read

Artists often use cutting-edge technology to create their work, but it's rare for technology to change the art world itself. Right now could be one of those rare moments.
Why it matters: AI technology has created a serious debate in the art world around issues of authorship and ownership — a debate that feels much more consequential than, say, arguments over monkey copyright.
Driving the news: In San Francisco, an art show curated by a venture capital firm — in partnership with an AI company valued at $20 billion — is making the case for AI-generated art as "legitimate work" by serious artists.
Meanwhile, in New York, the highly respected Perrotin gallery is showing the work of MSCHF, a venture-backed for-profit limited liability Delaware company that makes money by generating a seemingly endless stream of middlebrow hipster conceptualism.
The group's first gallery show includes AI-generated pictures of feet that don't exist, which are then painted onto canvas by "factory labor."
Between the lines: Both shows seem to be predicated on the assumption that there's something transgressive about exhibiting AI art.
On the face of it, that's odd, given that the history of market-ratified AI-generated art dates back at least as far as 2018, when not only did I buy some AI-generated art myself, but an AI-generated portrait sold at Christie's for $432,500 to what MSCHF describes as an "overly credible" buyer. (I think they mean credulous.)
And in the world of NFTs, there's no end of AI-generated projects, with no one raising so much as an eyebrow.
The intrigue: What changed is the arrival of a new generation of AIs, like DALL-E 2, Stable Diffusion, and DreamBooth. These AIs can output images indistinguishable from those of professional illustrators — precisely because they have learned to copy the work of those humans.
Artists have always learned from other artists. But when a machine is trained on the corpus of a single artist, that raises gnarly questions of authorship and copyright.
Ogbogu Kalu, for instance, a Nigerian engineer in Canada, has created AI models trained to emulate the comic-book style of Hollie Mengert and James Daly III — without the permission of either illustrator.
As Mengert points out to Andy Baio, she couldn't give Kalu permission to train his model on her work even if she wanted to, because so much of what she does involves characters owned by corporations like Disney or Penguin Random House.
The bottom line: Art is getting dumber (see: Beeple, KAWS, TikTok, etc.) just as AIs are getting smarter. Right around now, the lines are beginning to cross.