
What did the ancient Romans look like?



  • Daniel Voshart, a designer in Toronto, used machine learning and Photoshop to convert busts of Roman emperors into photorealistic images.
  • Voshart wanted to present the emperors as they would have appeared at the end of their reigns, setting aside any diseases that might have altered their appearance.
  • Artists have been known to exaggerate the good looks of the ruling class, so these images are probably more faithful.

    Artists have historically exaggerated how attractive rulers were in their portraits and sculptures. Queen Caroline of England said it best in 1795 when she described the moment when she first looked at her fiancé, King George IV: “I find him very fat and by no means as beautiful as his portrait.”


    Now a Toronto-based designer is correcting some of those creative liberties. Daniel Voshart combined machine learning, Photoshop, and historical records to turn 54 busts of Roman emperors from the Principate (27 BC to AD 285) into photorealistic images.

    Voshart originally took on the work as a kind of quarantine project. “I think it was the nature of the pandemic that had me thinking about morbid things, and maybe the morbid details of the emperors’ lives attracted me,” he tells Popular Mechanics. “I was working on a science fiction show set about 2,000 years in the future. Maybe I was drawn to thinking about the past.”

    But Voshart couldn’t have predicted that orders for his first prints – featuring emperors like Augustus, Nero, and Decius – would explode on his Etsy page. “I didn’t know what the response would be, that I would actually [have to] reduce working hours to meet demand,” he says.

    54 Roman emperors in a print that looks photorealistic

    Daniel Voshart

    The process wasn’t simply a matter of feeding in photos of the busts and having software spit out a perfect human face, says Voshart. To create the rough blueprint for each emperor’s face, Voshart relied heavily on a machine learning tool called Artbreeder. The software uses generative adversarial networks (GANs) to create an image.

    If this sounds familiar, it’s likely because you’ve heard of deepfakes, or synthetic media, which are often used for nefarious purposes. GANs, the underlying technology in deepfakes, help algorithms move beyond the simple task of classifying data into the realm of data creation – in this case, images. A GAN pits two neural networks against each other: a generator tries to produce images that a discriminator cannot tell apart from real ones. From just one image, a trained GAN can create a video clip of Richard Nixon, for example.
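The adversarial back-and-forth between generator and discriminator can be sketched in a few dozen lines. The following is a toy numpy example on one-dimensional data, nothing resembling StyleGAN: a linear “generator” learns to mimic a Gaussian distribution while a logistic “discriminator” tries to tell its samples from real ones. All names and hyperparameters here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

# "Real" data: samples from a 1-D Gaussian the generator must learn to mimic.
REAL_MEAN, REAL_STD = 4.0, 1.25

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

gen = np.array([1.0, 0.0])   # generator: x_fake = a*z + b (starts far from target)
disc = np.array([0.0, 0.0])  # discriminator: D(x) = sigmoid(w*x + c)

LR, BATCH = 0.05, 64
for step in range(2000):
    z = rng.normal(size=BATCH)
    x_real = rng.normal(REAL_MEAN, REAL_STD, size=BATCH)
    x_fake = gen[0] * z + gen[1]

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(disc[0] * x_real + disc[1])
    d_fake = sigmoid(disc[0] * x_fake + disc[1])
    grad_w = np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    grad_c = np.mean(-(1 - d_real) + d_fake)
    disc -= LR * np.array([grad_w, grad_c])

    # Generator step: push D(fake) toward 1 (non-saturating loss),
    # i.e. try to fool the freshly updated discriminator.
    d_fake = sigmoid(disc[0] * x_fake + disc[1])
    common = -(1 - d_fake) * disc[0]  # dLoss/dx_fake, chain-ruled into gen params
    gen -= LR * np.array([np.mean(common * z), np.mean(common)])

fake_mean = float(np.mean(gen[0] * rng.normal(size=1000) + gen[1]))
print(f"generator samples now have mean ~ {fake_mean:.2f} (target {REAL_MEAN})")
```

The same two-player dynamic, scaled up to deep convolutional networks and millions of face images, is what lets tools like Artbreeder synthesize photorealistic portraits.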

    While Voshart’s Roman emperors aren’t deepfakes, they share a similar technological framework – they’re just a different application of machine learning. Specifically, Artbreeder uses NVIDIA’s StyleGAN, an open-source GAN that computer scientists released back in December 2018.

    In a virtual talk on “GANs for Good” on September 30, Anima Anandkumar, director of machine learning research at NVIDIA, explained how the technology works. With a technique called disentanglement learning, a GAN can separate certain style elements and control each in isolation, something humans like Voshart can of course still do far better than machines.


    “Humans are great at this,” Anandkumar said in the talk. “We have different concepts that we learned as infants, and we learned them unsupervised. That’s how we can now compose and create entirely new images or concepts.” In practice, this means a user has more control over which properties of a source image carry over into the new one.


    Joel Simon, the developer of Artbreeder, tells Popular Mechanics that everything depends on how the program’s neural networks represent images in a latent “space.”

    “When a picture is uploaded, the face is cropped, and then a search is performed to find the closest point in the space for that picture,” he explains. “Once in this ‘space,’ it’s easy to ‘move around’ by adding or subtracting numbers that correspond to attributes like age or gender, which are referred to here as ‘genes.’ Adding color, say, is done in a very clever way: not by manipulating pixels, but by moving through the space of all faces.”
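Simon’s description – project an upload to the nearest point in the space, then “move” along gene directions – boils down to plain vector arithmetic. This is a toy numpy sketch, not Artbreeder’s actual code; the gallery of random vectors, the 512-dimensional size, and the `age_direction` “gene” are all made-up stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "latent space": each face is a 512-dim vector (a common GAN latent size).
DIM = 512
gallery = rng.normal(size=(1000, DIM))  # stand-in for faces the model can make

def project(image_vec, gallery):
    """The 'search' step after upload: find the closest latent point."""
    dists = np.linalg.norm(gallery - image_vec, axis=1)
    return gallery[np.argmin(dists)]

# A hypothetical learned direction (a "gene") such as age or gender.
age_direction = rng.normal(size=DIM)
age_direction /= np.linalg.norm(age_direction)

uploaded = rng.normal(size=DIM)        # a cropped face, encoded as a vector
latent = project(uploaded, gallery)

# "Moving" through face space: add or subtract along the gene's direction.
older = latent + 2.0 * age_direction
younger = latent - 2.0 * age_direction
```

In a real system, the resulting vectors would be fed back through the generator to render new faces; the key point is that edits happen in the space of all faces, not on individual pixels.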

    This makes it easier for an artist like Voshart to upload training data – in this case, around 800 samples of Roman imperial busts – to get a hyper-realistic face with fewer artifacts or anomalies introduced by the software.

    Elagabalus

    Daniel Voshart

    Nevertheless, Voshart still had plenty of work to do after running the Artbreeder software. In his testing phase, before creating the emperors’ faces shown in his final prints, the results were riddled with anomalies.

    “The result shows lots of weird artifacts and averages out the features, which is the opposite of what you want if you’re trying to keep an interesting expression,” says Voshart. “My process was to download from Artbreeder, modify in Photoshop, upload it back into Artbreeder, and repeat.”

    Although fixing errors in the generated images could be a headache, Voshart says there is “no remote chance” that he could have done the job without the power of machine learning.

