Google officially unveiled the text-to-video AI filmmaking tool, Flow, at Google I/O 2025 in May alongside the Veo 3 model. At that time, Google revealed that filmmakers had already begun using Flow to produce specific scenes for an upcoming theatrical movie. In October, Google enhanced Flow with new features and released the Veo 3.1 update. Now, in mid-November, Google has introduced four additional Flow features designed to streamline and improve the workflow for creators relying on generative AI video tools.
According to a recent blog post from Google, over 500 million videos have been created using Flow since its launch in May. Google says user feedback has consistently asked for greater precision and control over both image and video generation within the tool. In response, the four features officially announced on Monday, some of which had already rolled out over recent weeks, give creators finer-grained control over how images and clips are generated and edited.
One of the most significant new additions is native support for Nano Banana Pro, Google’s most advanced AI image generator to date. With this integration, Flow subscribers can generate and edit images directly inside Flow rather than switching to other apps such as Gemini for complex image work; free users still have access to the Imagen and Nano Banana models. Users can change a character’s outfit, remove distracting elements, or tweak poses, camera angles, and lighting with simple prompts. Creators can also blend image elements with a single prompt to create consistent assets across scenes.
New editing tools in Flow let users draw and annotate directly on images when using the Frames to Video feature, so the AI can interpret hand-drawn instructions alongside text prompts during video generation. Flow also supports inserting and removing objects in a clip without altering the rest of the scene, which makes it possible to fine-tune nearly finished videos; object removal is still experimental and works best on stationary or minimally moving objects. Finally, users can adjust camera angles and motion paths within Flow to visualize a scene from different perspectives, an option that works best on clips without existing camera movement.
Together, these updates give filmmakers and creators more flexibility and control, making AI-assisted storytelling more intuitive and efficient.



