People tend to rejoice in the disclosure of a secret.
Or, at the very least, media outlets have come to realize that news of “mysteries solved” and “hidden treasures revealed” generates traffic and clicks.
So I’m never surprised when I see AI-assisted revelations about famous masters’ works of art go viral.
Over the past year alone, I’ve come across articles highlighting how artificial intelligence recovered a “secret” painting of a “lost lover” of Italian painter Modigliani, “brought to life” a “hidden Picasso nude”, “resurrected” Austrian painter Gustav Klimt’s destroyed works and “restored” portions of Rembrandt’s 1642 painting “The Night Watch.” The list goes on.
As an art historian, I’ve become increasingly concerned about the coverage and circulation of these projects.
They have not, in actuality, revealed one secret or solved a single mystery.
What they have done is generate feel-good stories about AI.
Are we actually learning anything new?
Take the reports about the Modigliani and Picasso paintings.
These were projects executed by the same company, Oxia Palus, which was founded not by art historians but by doctoral students in machine learning.
In both cases, Oxia Palus relied upon traditional X-rays, X-ray fluorescence and infrared imaging that had already been carried out and published years prior – work that had revealed preliminary paintings beneath the visible layer on the artists’ canvases.
The company edited these X-rays and reconstituted them as new works of art by applying a technique called “neural style transfer.” This is a sophisticated-sounding term for a program that breaks works of art down into extremely small units, extrapolates a style from them, and then promises to recreate images of other content in that same style.
Essentially, Oxia Palus stitches new works out of what the machine can learn from the existing X-ray images and other paintings by the same artist.
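For readers curious what “neural style transfer” actually involves, here is a minimal sketch in the spirit of the original Gatys et al. formulation: a pretrained network supplies “content” features and “style” statistics, and an image is optimized to match both. It is an illustration only – the layer choices, loss weights and file names are assumptions made for the example, not a reconstruction of Oxia Palus’s actual pipeline – and it assumes PyTorch and torchvision (0.13 or later) are installed.

```python
# Minimal sketch of neural style transfer (Gatys-style), for illustration only.
# Assumes PyTorch + torchvision >= 0.13; the image file names are placeholders.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

def load_image(path, size=256):
    tf = transforms.Compose([
        transforms.Resize((size, size)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet stats expected by VGG
                             std=[0.229, 0.224, 0.225]),
    ])
    return tf(Image.open(path).convert("RGB")).unsqueeze(0).to(device)

# A pretrained VGG19 acts as a fixed feature extractor; it is never trained here.
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

STYLE_LAYERS = {0, 5, 10, 19, 28}   # conv layers summarizing "style"
CONTENT_LAYER = 21                  # deeper conv layer preserving "content"

def extract(x):
    style_feats, content_feat = [], None
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE_LAYERS:
            style_feats.append(x)
        if i == CONTENT_LAYER:
            content_feat = x
    return style_feats, content_feat

def gram(feat):
    # Gram matrix: channel-to-channel correlations, the usual "style" summary.
    b, c, h, w = feat.shape
    f = feat.view(c, h * w)
    return f @ f.t() / (c * h * w)

content_img = load_image("underpainting_xray.jpg")   # hypothetical file name
style_img = load_image("artist_reference.jpg")       # hypothetical file name

style_targets = [gram(f).detach() for f in extract(style_img)[0]]
content_target = extract(content_img)[1].detach()

# Start from the content image and optimize its pixels directly.
result = content_img.clone().requires_grad_(True)
optimizer = torch.optim.Adam([result], lr=0.02)

for step in range(300):
    optimizer.zero_grad()
    style_feats, content_feat = extract(result)
    style_loss = sum(F.mse_loss(gram(f), t) for f, t in zip(style_feats, style_targets))
    content_loss = F.mse_loss(content_feat, content_target)
    loss = 1e6 * style_loss + content_loss   # illustrative weighting
    loss.backward()
    optimizer.step()
```

The key point for the argument here: everything the output “recovers” comes from the statistics of the reference images fed in, not from any information hidden in the painting itself.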
But outside of flexing the prowess of AI, is there any value – artistically, historically – to what the company is doing?
These recreations don’t teach us anything we didn’t know about the artists and their methods.
Artists paint over their works all the time. It’s so common that art historians and conservators have a word for it: pentimento. None of these earlier compositions was an Easter egg deposited in the painting for later researchers to discover. The original X-ray images were certainly valuable in that they offered insights into artists’ working methods.
But to me, what these programs are doing isn’t exactly newsworthy from the perspective of art history.
The humanities on life support
So when I do see these reproductions attracting media attention, it strikes me as soft diplomacy for AI, showcasing a “cultured” application of the technology at a time when skepticism of its deceptions, biases, and abuses is on the rise.
When AI gets attention for recovering lost works of art, it makes the technology sound a lot less scary than when it garners headlines for creating deep fakes that falsify politicians’ speech or for using facial recognition for authoritarian surveillance.
These studies and projects also seem to promote the idea that computer scientists are more adept at historical research than art historians.
For years, university humanities departments have been gradually squeezed of funding, with more money funneled into the sciences. With their claims to objectivity and empirically provable results, the sciences tend to command greater respect from funding bodies and the public, which offers an incentive to scholars in the humanities to adopt computational methods.
Art historian Claire Bishop criticized this development, noting that when computer science becomes integrated into the humanities, “[t]heoretical problems are steamrollered flat by the weight of data,” which generates deeply simplistic results.
At their core, art historians study the ways in which art can offer insights into how people once saw the world. They explore how works of art shaped the worlds in which they were made and would go on to influence future generations.
A computer algorithm cannot perform these functions.
However, some scholars and institutions have allowed themselves to be subsumed by the sciences, adopting their methods and partnering with them in sponsored projects.
Literary critic Barbara Herrnstein Smith has warned about ceding too much ground to the sciences. In her view, the sciences and the humanities are not the polar opposites they are often publicly portrayed to be. But this portrayal has been to the benefit of the sciences, prized for their supposed clarity and utility over the humanities’ alleged obscurity and uselessness. At the same time, she has suggested that hybrid fields of study that fuse the arts with the sciences may lead to breakthroughs that wouldn’t have been possible had each existed as a siloed discipline.
I’m skeptical. Not because I doubt the utility of expanding and diversifying our toolbox; to be sure, some scholars working in the digital humanities have taken up computational methods with subtlety and historical awareness to add nuance to or overturn entrenched narratives.
But my lingering suspicion emerges from an awareness of how public support for the sciences and disparagement of the humanities means that, in the endeavor to gain funding and acceptance, the humanities will lose what makes them vital. The field’s sensitivity to historical particularity and cultural difference makes the application of the same code to widely diverse artifacts utterly illogical.
How absurd to think that black-and-white photographs from 100 years ago would produce colors in the same way that digital photographs do now. And yet, this is exactly what AI-assisted colorization does.
That particular example might sound like a small qualm, sure. But this effort to “bring events back to life” routinely mistakes representations for reality. Adding color does not show things as they were but recreates what is already a recreation – a photograph – in our own image, now with computer science’s seal of approval.
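To see why the colors cannot be recoveries of the past, consider a toy sketch of how learned colorization typically works: a network is trained to predict chroma channels from luminance, using pairs made by desaturating modern color photographs. The model size and data below are placeholders, and the sketch assumes only that PyTorch is installed; it is not any particular product’s pipeline.

```python
# Toy sketch of learned colorization: luminance in, chroma out.
# The predicted colors can only reflect the statistics of the training photos,
# not anything preserved in a century-old black-and-white negative.
import torch
import torch.nn as nn

class TinyColorizer(nn.Module):
    """Small CNN mapping a 1-channel luminance image to 2 chroma channels (Lab 'ab')."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1), nn.Tanh(),  # chroma scaled to [-1, 1]
        )

    def forward(self, luminance):
        return self.net(luminance)

model = TinyColorizer()
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Training pairs would come from *modern* color photographs, desaturated to
# grayscale; stand-in random tensors are used here just to show the loop.
luminance = torch.rand(8, 1, 64, 64)          # fake batch of grayscale crops
true_chroma = torch.rand(8, 2, 64, 64) * 2 - 1  # fake chroma targets in [-1, 1]

for _ in range(10):
    optimizer.zero_grad()
    loss = criterion(model(luminance), true_chroma)
    loss.backward()
    optimizer.step()
```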
Art as a toy in the sandbox of scientists
Near the conclusion of a recent paper devoted to the use of AI to disentangle X-ray images of Jan and Hubert van Eyck’s “Ghent Altarpiece,” the mathematicians and engineers who authored it refer to their method as relying upon “choosing ‘the best of all possible worlds’ (borrowing Voltaire’s words) by taking the first output of two separate runs, differing only in the ordering of the inputs.”
Perhaps if they had familiarized themselves more with the humanities, they would know how satirically those words were meant when Voltaire used them to mock a philosopher who believed that rampant suffering and injustice were all part of God’s plan – that the world as it was represented the best we could hope for.
Maybe this “gotcha” is cheap. But it illustrates the problem of art and history becoming toys in the sandboxes of scientists with no training in the humanities.
If nothing else, my hope is that journalists and critics who report on these developments will cast a more skeptical eye on them and alter their framing.
In my view, rather than lionizing these studies as heroic achievements, those responsible for conveying their results to the public should see them as opportunities to question what the computational sciences are doing when they appropriate the study of art. And they should ask whether any of this is for the good of anyone or anything but AI, its most zealous proponents and those who profit from it.
This article by Sonja Drimmer, Associate Professor of Medieval Art, University of Massachusetts Amherst, is republished from The Conversation under a Creative Commons license. Read the original article.