Adobe claims its new image model is one of its best yet

Firefly, Adobe's family of generative AI models, doesn't have the best reputation among creatives.

The Firefly image generation model in particular has been mocked as underwhelming and flawed compared with Midjourney, OpenAI's DALL-E 3 and other rivals, with a tendency to distort limbs and landscapes and to miss the nuances in prompts. But Adobe is attempting to right the ship with its third-generation model, Firefly Image 3, which is being released this week during the company's Max London conference.

The model, now available in Photoshop (beta) and on Adobe's Firefly web app, produces more “realistic” images than its predecessors (Image 1 and Image 2) thanks to an improved ability to understand longer, more complex prompts and scenes, as well as better lighting and text generation. It should render things like typography, iconography, raster images and line art more accurately, Adobe says, and is “significantly” better at depicting dense crowds and people with “detailed features” and “a variety of moods and expressions.”

In my brief, unscientific testing, Image 3 appears to be a step up from Image 2.

I wasn't able to try Image 3 myself. But Adobe PR sent a few outputs and prompts from the model, and I ran those same prompts through Image 2 on the web to get samples to compare the Image 3 outputs against. (Keep in mind that the Image 3 outputs may have been cherry-picked.)

Notice the lighting in this headshot from Image 3 compared with the Image 2 shot below it:

From Image 3. Prompt: “Studio portrait of a young woman.” Photo credit: Adobe

From Image 2, same prompt as above. Photo credit: Adobe

The Image 3 output looks more detailed and lifelike to my eyes, with shadows and contrast that are largely absent from the Image 2 sample.

Here's a series of images showing off Image 3's scene understanding:

From Image 3. Prompt: “An artist sits at a desk in her studio, looking pensive and ethereal, surrounded by countless paintings.” Photo credit: Adobe

From Image 2. Prompt: “An artist sits at a desk in her studio, looking pensive and ethereal.” Photo credit: Adobe

Note that the Image 2 sample is quite plain compared with the Image 3 output in terms of level of detail and overall expressiveness. The subject's shirt in the Image 3 sample (around the waist) is rendered a bit strangely, but the pose is more complex than the Image 2 subject's. (And the Image 2 clothing looks slightly off, too.)

Some of Image 3's improvements can no doubt be attributed to a larger and more diverse training data set.

Like Image 2 and Image 1, Image 3 is trained on uploads to Adobe Stock, Adobe's royalty-free media library, along with licensed content and public domain content whose copyright has expired. Adobe Stock is constantly growing, and so, in turn, is the available training data set.

To fend off lawsuits and position itself as a more “ethical” alternative to generative AI vendors that train indiscriminately on images (e.g. OpenAI, Midjourney), Adobe launched a program to pay Adobe Stock contributors whose work ends up in the training data set. (The program's terms are rather opaque, though, we'd note.) Controversially, Adobe also trains Firefly models on AI-generated images, which some consider a form of data laundering.

Recent Bloomberg reporting revealed that AI-generated images in Adobe Stock aren't excluded from the training data of Firefly's image-generating models, a worrying prospect considering those images can contain regurgitated copyrighted material. Adobe has defended the practice, claiming that AI-generated images make up only a small portion of its training data and go through a moderation process to ensure they don't depict trademarks or recognizable characters, or reference artists' names.

Of course, neither more diverse, more “ethically” sourced training data nor content filters and other safeguards guarantee a perfectly flaw-free experience – see users generating people flipping the bird with Image 2. The real test of Image 3 will come once the community gets its hands on it.

New AI-powered features

Image 3 powers several new features in Photoshop beyond the upgraded text-to-image functionality.

A new “style engine” in Image 3, along with a new auto-stylization toggle, allows the model to generate a wider range of colors, backgrounds and subject poses. These feed into Reference Image, an option that lets users condition the model on an image whose colors or tone they want their future generated content to match.

Three new generative tools – Generate Background, Generate Similar and Enhance Detail – use Image 3 to make precise edits to images. The (self-explanatory) Generate Background replaces a background with a generated scene that blends into the existing image, while Generate Similar offers variations on a selected portion of an image (a person or an object, say). As for Enhance Detail, it “fine-tunes” images to improve sharpness and clarity.

If these features sound familiar, it's because they've been in beta on the Firefly web app for at least a month (and Midjourney has offered similar tools for far longer). This is their Photoshop debut – in beta.

Speaking of the web app: Adobe isn't neglecting this alternative route to its AI tools.

Coinciding with the release of Image 3, the Firefly web app is getting Structure Reference and Style Reference, which Adobe touts as new ways to “expand creative control.” (Both were announced in March, but are now generally available.) Structure Reference lets users generate new images that match the “structure” of a reference image – say, a head-on view of a race car. Style Reference is essentially style transfer by another name, preserving the content of an image (e.g. elephants on an African safari) while mimicking the style (e.g. a pencil sketch) of a target image.

Here's Structure Reference in action:

Original image. Photo credit: Adobe

Transformed with Structure Reference. Photo credit: Adobe

And Style Reference:

Original image. Photo credit: Adobe

Transformed with Style Reference. Photo credit: Adobe
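If you're wondering what “style transfer by another name” means in practice: Adobe hasn't disclosed how Style Reference works under the hood, but the classic formulation of style transfer (Gatys et al., 2015) treats a deep network's raw features as an image's content and the correlations between feature channels as its style. Here's a minimal sketch of that classic idea only – not Adobe's method – assuming PyTorch and features already extracted from a network like VGG:

```python
# Minimal sketch of classic neural style transfer losses (Gatys et al., 2015).
# Illustrative only: Adobe hasn't said how Style Reference works internally.
import torch
import torch.nn.functional as F

def gram_matrix(feats: torch.Tensor) -> torch.Tensor:
    """Summarize style as channel-to-channel feature correlations."""
    b, c, h, w = feats.shape
    flat = feats.view(b, c, h * w)
    return flat @ flat.transpose(1, 2) / (c * h * w)

def style_loss(gen_feats: list, style_feats: list) -> torch.Tensor:
    """Match the generated image's Gram matrices to the style image's."""
    return sum(F.mse_loss(gram_matrix(g), gram_matrix(s))
               for g, s in zip(gen_feats, style_feats))

def content_loss(gen_feat: torch.Tensor, content_feat: torch.Tensor) -> torch.Tensor:
    """Keep the generated image's high-level layout close to the content image's."""
    return F.mse_loss(gen_feat, content_feat)

# Optimizing content_loss + weight * style_loss over the pixels of a generated
# image transfers the target style while preserving the source content.
```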

I asked Adobe whether, with all the upgrades, Firefly image generation pricing would change. Currently, the cheapest Firefly Premium plan is $4.99 per month, undercutting competitors like Midjourney ($10 per month) and OpenAI (which gates DALL-E 3 behind a $20-per-month ChatGPT Plus subscription).

Adobe said that its current pricing tiers, along with its generative credit system, will remain in place for now. The company also said that its indemnification policy, under which Adobe will pay copyright claims related to works generated in Firefly, won't be changing either, nor will its approach to watermarking AI-generated content. Content Credentials – metadata that identifies AI-generated media – will continue to be attached automatically to all Firefly image generations on the web and in Photoshop, whether generated from scratch or partially edited using generative features.
