
Adobe Invites You to 'Embrace Technology' with New Firefly Video Generator

On Monday, ahead of its Adobe Max event, Adobe launched video generation capabilities for its Firefly AI platform. Starting today, users can test Firefly's video generator on Adobe's website for the first time, or try the new AI-powered Generative Extend feature in the Premiere Pro beta app.

On the Firefly website, users can try the text-to-video model or the image-to-video model, both of which generate up to five seconds of AI-created video. (The web beta is free to use, though it may come with usage limits.)

Adobe claims it has trained Firefly to produce both dynamic content and photorealistic media, depending on the prompt. At least in theory, Firefly can also render legible text within videos, a task AI image generators have historically struggled with. The Firefly video web app includes controls to toggle camera pans and to adjust camera movement intensity, angle, and shot size.
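To give a sense of how those controls might map onto a request, here is a minimal Python sketch of what a text-to-video call could look like. The endpoint, field names, and authorization flow are assumptions invented purely for illustration; Adobe has not published this interface.

```python
import requests  # third-party HTTP client: pip install requests

# Hypothetical endpoint, invented for illustration only;
# this is not Adobe's published Firefly API.
FIREFLY_VIDEO_ENDPOINT = "https://firefly.example.adobe.com/v1/text-to-video"

payload = {
    "prompt": "A lighthouse on a rocky coast at dusk, waves crashing",
    "duration_seconds": 5,            # the web beta caps clips at five seconds
    "camera": {
        "pan": "left_to_right",       # toggleable camera pan
        "movement_intensity": "low",  # how aggressively the camera moves
        "angle": "low_angle",
        "shot_size": "wide",
    },
}

response = requests.post(
    FIREFLY_VIDEO_ENDPOINT,
    json=payload,
    headers={"Authorization": "Bearer <ACCESS_TOKEN>"},  # placeholder credential
    timeout=60,
)
response.raise_for_status()
print(response.json())  # e.g. a job ID or a URL for the rendered clip
```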

In the Premiere Pro beta app, users can try the Generative Extend feature to lengthen video clips by up to two seconds. The feature is designed to generate an extra beat in a scene, keep camera movement continuous, and carry subject motion forward. Background audio will be extended as well, giving the public a first taste of an AI audio model Adobe has quietly been working on. The audio extension will not recreate voices or music, however, to avoid copyright lawsuits from record labels.

In demos shared prior to launch, Firefly's Generative Extend feature produced more impressive and practical results than its text-to-video model. The text-to-video and image-to-video models lack the polish and wow factor of AI video from Adobe's competitors, such as Runway's Gen-3 Alpha or OpenAI's Sora (although the latter has yet to ship). Adobe says it has focused more on AI-assisted editing than on generating videos from scratch, which may be more appealing to its user base.

Adobe's AI features must strike a delicate balance with its creative audience. On one hand, the company aims to lead in a crowded field of AI startups and tech companies pushing impressive models. On the other hand, many creators are concerned that AI features may soon replace the work they've been doing for decades with their mouse, keyboard, and stylus. This is why Adobe's first Firefly video feature, Generative Extend, uses AI to solve an existing problem for video editors (a clip that isn't quite long enough) rather than generating new video from scratch.

Alexandru Costin, Adobe's Vice President of Generative AI, said, "Our audience is the most pixel-perfect on Earth. They want AI to help them enhance, vary, or extend existing assets rather than generate entirely new ones. So for us, it's crucial to focus on generative editing first and net-new creation second."

Production-grade generative models that simplify editing: that's the formula Adobe used to achieve initial success with Firefly's image model in Photoshop. Adobe executives have previously stated that Photoshop's Generative Fill is one of the most used new features of the past decade, largely because it complements existing workflows and speeds them up. The company hopes to replicate that success with video.

Adobe is trying to tread carefully with creators, reportedly paying photographers and artists $3 for each minute of video used to train its Firefly AI model. Still, many creators remain wary, fearing that AI tools will render them obsolete. (Adobe also announced AI tools on Monday that automatically generate content for advertisers.)

Costin reassures concerned creators that generative AI tools will create more demand for their work, not less: "If you think about the needs of companies wanting to create individualized and hyper-personalized content for any consumer they interact with, there is limitless demand."

Adobe's AI chief urges people to consider how other technological revolutions have benefited creatives, comparing the advent of AI tools to digital publishing and digital photography. He notes how these advancements were initially seen as threats and says that if creators reject AI, they'll have a tough time ahead.

Costin said, "Elevate your creativity, enhance your skills, and become a creative professional who can produce 100 times more content using these tools. Content is needed, and now you can do it without sacrificing your life. Embrace technology. This is the new digital literacy."

Firefly will also automatically embed an "AI-generated" watermark in the metadata of videos created this way. Meta already uses similar identification signals on Instagram and Facebook to label media as AI-generated. The idea is that platforms or individuals can rely on such metadata to determine a video's authenticity, as long as the watermark is present. However, Adobe's videos will not carry a visible, human-readable label by default indicating they are AI-generated.
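As a rough illustration of how a platform or viewer might check for that metadata, here is a minimal Python sketch built around the open-source c2patool CLI from the Content Authenticity Initiative. The exact manifest fields Firefly writes are an assumption; only the general approach (read the embedded provenance manifest and look for an AI-generation marker) is implied by the article.

```python
import json
import subprocess


def read_provenance_manifest(media_path: str) -> dict | None:
    """Run c2patool on a media file and return its provenance manifest as JSON.

    c2patool is the Content Authenticity Initiative's CLI; running it with a
    file path prints the embedded manifest store. Returns None if the tool is
    missing or the file carries no readable manifest.
    """
    try:
        result = subprocess.run(
            ["c2patool", media_path],
            capture_output=True,
            text=True,
            check=True,
        )
        return json.loads(result.stdout)
    except (FileNotFoundError, subprocess.CalledProcessError, json.JSONDecodeError):
        return None


def looks_ai_generated(manifest: dict) -> bool:
    """Heuristic check for an AI-generation marker in the manifest.

    'trainedAlgorithmicMedia' is the C2PA/IPTC digital source type for
    AI-generated content; the 'ai-generated' string is an extra, assumed label.
    """
    blob = json.dumps(manifest).lower()
    return "trainedalgorithmicmedia" in blob or "ai-generated" in blob


if __name__ == "__main__":
    manifest = read_provenance_manifest("firefly_clip.mp4")
    if manifest and looks_ai_generated(manifest):
        print("Metadata indicates this clip was AI-generated.")
    else:
        print("No AI-generation marker found in the metadata.")
```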

Adobe has specifically designed Firefly to produce "commercially safe" media. The company claims it has not trained Firefly on images and videos that include drugs, nudity, violence, political figures, or copyrighted material. In principle, this means Firefly's video generator should not create "unsafe" videos.

Now that the internet has free access to Firefly's video model, we'll see if these claims hold true.
