Creating Concert Visuals With AI
AI-generated video from Runway ML
AI-Generated Visuals: New Possibilities… With Caveats
AI-generated videos and images are opening new possibilities for live performers, VJs, and stage designers. Describe the image or video you want, and the AI will create it for you. This means you can generate unique visuals that match your music or performance style without needing advanced technical skills or expensive software. For example, you could type a prompt like “a glowing neon cityscape at night with floating geometric shapes” and receive a custom video loop that fits your electronic music set.
However, many creative professionals remain concerned about the implications of AI in the arts. Some worry that AI-generated content lacks the human touch, originality, and emotional depth that traditional art forms possess. Others are concerned that many of these models are trained on existing artworks without proper attribution or compensation for the original creators. Still others worry about the environmental impact of training large AI models, which can consume significant computational resources and energy. These are valid concerns that the industry is still grappling with, and it’s important to approach AI-generated content thoughtfully and ethically.
Nonetheless, this technology is barreling forward, and it’s becoming increasingly accessible to artists and performers. AI-generated visuals can be a powerful tool for enhancing live performances, creating immersive environments, and adding a unique visual dimension to your shows. They can help you quickly generate high-quality content that aligns with your artistic vision, allowing you to focus more on the performance itself rather than spending hours creating visuals from scratch.
Why Visibox Is Ideal for AI Visuals
Visibox is performance-friendly media playback software tailored for live shows. It lets you drag and drop videos, images, and even live camera feeds into a setlist, then trigger and mix those visuals during your performance. Crucially, Visibox makes it effortless to loop any content and conform it to any screen. That flexibility is perfect for incorporating AI-generated visuals, which often work well as atmospheric loops. Visibox runs on macOS and Windows and presents a clear, simple interface.
Because Visibox was designed for performers, it supports common file formats (MP4 videos, JPEG/PNG images, etc.) and has useful features like aspect ratio adjustments and projector calibration. In practice, this means any AI-generated video or image that you create can be quickly loaded into Visibox and played on stage with little to no editing required.
AI Tools for Generating Performance Visuals
A variety of AI tools are now available to create videos and imagery for your performances. The list of services and features is ever-evolving, so we’ll focus on the most prominent and accessible options as of mid-2025 that are suitable for generating visuals you can use in Visibox.
OpenAI Sora
Sora is OpenAI’s cutting-edge text-to-video model. Given a text prompt (and optionally an image or video input), Sora generates a short video clip that attempts to match the described scene. The model gained attention for its realistic detail, camera dynamics, and strong prompt adherence. If you pay enough money, Sora can generate clips up to about 20 seconds long, and its newest version supports resolutions up to 1080p. Users can specify different aspect ratios (widescreen, vertical, or square) and even supply their own starting assets to have Sora extend or remix them, a powerful feature for creatives wanting to animate specific images or video snippets. Sora also has a tool to seamlessly loop any video, which is perfect for live performance visuals that need to play continuously without jarring cuts.
If you’re paying for an OpenAI ChatGPT plan, you’ve already got access to Sora. It’s included in the Plus subscription, which costs $20/month as of mid-2025. However, for higher resolution and longer videos, you’ll need the Pro plan, which costs a whopping $200/month. More at https://openai.com/sora/
Google Veo
Veo is Google’s generative video model (part of the Google Gemini AI initiative) and a strong competitor to Sora. Veo is designed to produce cinematic-quality videos from text prompts, and it notably can generate native audio along with the video, meaning it can make characters speak or add sound effects and music that sync with the visuals. For live performance visuals you might not need audio, but this indicates the level of sophistication in Veo’s outputs. With Google’s $250/mo AI Ultra plan, Veo can create longer clips (beyond one minute) at 1080p HD resolution or higher, with fine control over visual style, camera movement, and lighting. Google has emphasized Veo’s enhanced realism and physics (the model handles complex motion more believably) and its ability to interpret nuanced, cinematic prompts. Essentially, Veo aims to give creators “exceptional creative control” for professional-grade content, which is potentially very useful if you want highly polished backdrop visuals or even narrative video segments in a performance. If you’ve got a paid Google plan, you’ve already got access to a more stripped-down version of Veo. More at https://deepmind.google/models/veo/
Runway ML (Gen-2 / Gen-3)
Runway is an AI creativity platform that is popular among artists for its user-friendly interface and versatile tools. It offers text-to-video generation, image-to-video animation, and AI-powered editing features like looping and selective motion brushes. Runway’s Gen-3 model excels at animating still images, allowing users to bring static visuals like show posters or logos to life with smooth camera movements or subtle animations. Affordable pricing starts at $15/month, with free options available for experimenting. Outputs are silent videos, ideal for performance visuals, and its built-in looping tools make it easy to create seamless clips for use in Visibox. More from Runway at https://runwayml.com/
Microsoft Bing Video Creator
Microsoft’s Bing Video Creator is the newest entrant (launched in June 2025) and is essentially Microsoft’s interface to OpenAI’s Sora model, covered above. Its unique place in this list is convenience: it’s a free, mobile-based tool that turns text prompts into very short videos. If you have the Bing app on your phone, you can simply type “Create a video of …” and within a minute get an AI-generated clip. For example, you might type “a pulsating neon fractal loop” and receive a 5-second animated fractal video. The tool currently outputs vertical videos (9:16) only, since it’s geared toward mobile/social sharing, though Microsoft has stated that horizontal (16:9) support is on the way. Also, generation lengths are fixed at 5 seconds for now. More at https://www.bing.com/images/create?ctype=video
Other Emerging Tools
Adobe is starting to work AI into its Creative Cloud suite, with tools like Adobe Firefly for image generation and Adobe Sensei for video editing. While these are not strictly text-to-video, they can be useful for creating assets that you can then animate or edit in Visibox. Adobe’s tools are subscription-based, so they might not be as accessible as the others listed here.
Midjourney is another popular AI image generator that has recently added video capabilities. It allows users to create short video clips from text prompts, but the focus is still primarily on static images. However, you can use Midjourney to generate high-quality stills that can be animated later in other tools.
Stable Diffusion is an open-source AI model that can generate images and videos from text prompts. It’s not as user-friendly as some of the other options, but it’s a powerful tool for those who want more control over the generation process. There are also various web interfaces built on top of Stable Diffusion that make it easier to use.
In short, the ecosystem of AI creative tools is growing fast. New text-to-video models are coming out frequently, so keep an eye on AI news. The good news is that the general techniques you learn – how to craft prompts, how to choose or prepare input images, and how to integrate the content into Visibox – will apply even as specific tools come and go. Focus on the core concepts and you’ll be able to adapt to whatever new generator arrives next.
Copyright & Licensing
The laws around AI-generated content are still evolving, but here are some key points to keep in mind when using AI visuals in your performances:
- AI Content Cannot Be Copyrighted: In most jurisdictions, you cannot copyright content that is generated entirely by an AI. If you use an AI tool to create a video or image without adding any creative direction or modification of your own, that content likely cannot be copyrighted. However, if you add your own creative elements (like editing, remixing, or combining multiple AI outputs), you can claim copyright on the final product. Note that this also means any AI-generated content you find on the Internet is likely not protected by copyright, so you can generally use it, as long as you don’t claim it as your own original work.
- Licensing: Generally, you own the rights to the content generated by AI tools, as long as you have a license to use that tool. For example, if you create a video using OpenAI’s Sora through your ChatGPT subscription, you retain ownership of that video. However, always check the specific terms of service for the tool you’re using, as the rules differ. And since the output may not be copyrightable in the first place, any license terms covering it may be hard to enforce.
Creating “Vibey” Visuals and Writing Effective Prompts
One of the main use cases for AI in performance visuals is generating ambient, looping backgrounds – the kind of “vibey” visuals that enhance the mood without stealing the spotlight. Examples might include abstract patterns, surreal landscapes, cosmic scenes, neon geometric designs, slow-motion nature imagery, etc. To get these from an AI, the prompt you write is key. Here are some tips for crafting prompts that yield great performance visuals:
- Make Your Short Videos Long: AI video tools often default to short clips (5-10 seconds), but there are ways to stretch a clip smoothly. Use a phrase like “fast motion,” “high speed,” or “time lapse” in your prompt to encourage the AI to squeeze a lot of action into a short clip. Then use a video editor such as Adobe Premiere, DaVinci Resolve, or Final Cut Pro to stretch the video to a longer duration, enabling “smoothing” or “frame interpolation” so the slowed footage still feels fluid. This way, you can take a 10-second clip and stretch it to 30 seconds or more without losing visual interest (see the ffmpeg sketch after this list).
- Create Seamless Loops: Many AI tools can generate videos that loop seamlessly, which is perfect for live performance. To achieve this, include terms like “seamless loop”, “infinite loop”, or “continuous motion” in your prompt. This signals to the AI that you want a video that can play repeatedly without noticeable jumps or cuts. For example, “a seamless loop of glowing particles drifting through a dark void” will likely yield a video that flows smoothly when played on repeat. Some AI tools (such as Sora) also let you upload an existing video, even one created elsewhere, and generate a new version that loops seamlessly, which is a great way to make an existing clip suitable for live performance. Another technique is to bring your clip into a video editor, split it in half, move the first half to the end, and add a cross-fade transition between the two halves; this creates a seamless loop that can play indefinitely without jarring cuts (the “Making AI Videos Loop Seamlessly” section below walks through this in more detail).
- Describe the Mood and Style: Since you likely want something that complements your music’s mood, include adjectives about atmosphere (e.g. “dreamy vaporwave cityscape at night, with neon lights blinking softly” or “soothing underwater scene with gentle floating particles, loop”). Mentioning an art or film style can also guide the model (e.g. “in the style of a 70s psychedelic animation” or “cinematic drone shot”).
- Keep it Continuous: For background loops, you generally want continuous movement or evolving visuals, not a lot of hard cuts or scene changes. You can prompt for that by emphasizing a single scene or a single subject with ongoing action. For instance, “a single camera shot orbiting around a glowing crystal” will likely produce a continuous rotating view, whereas asking for “multiple different scenes” could confuse the AI into a non-loopable result. Helpful phrases here might be: “fixed camera,” “gentle movement,” or “gradual transition.”
- Be Specific with Motion: If you have a particular movement in mind (camera pan, zoom, objects moving in a direction), describe it. “Camera slowly zooms into the forest” or “stars drifting upwards” can lead the AI to produce that motion. This is helpful to avoid erratic or fast motions that might be jarring on a big screen. Smooth, slow movements often work best for live visuals (they’re less distracting yet still engaging).
- Iterate and Refine: Prompting is part art, part science. Don’t hesitate to run multiple generations and refine your language. If the colors are off, add something like “blue and purple color scheme.” If it’s too chaotic, request “minimalist” or “simple background.” Many AI generators also have variations or re-roll options – use these to your advantage to get a version that feels right.
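To make the “stretch and interpolate” tip concrete, here is a minimal sketch using ffmpeg driven from Python. It assumes ffmpeg is installed and on your PATH, and clip.mp4 / stretched.mp4 are placeholder file names; a desktop editor’s optical-flow retiming achieves the same result.

```python
import subprocess

# Slow a short AI clip to 3x its length, then let ffmpeg's motion-
# compensated interpolation (minterpolate) synthesize the in-between
# frames so the stretched footage still plays smoothly at 30 fps.
subprocess.run(
    [
        "ffmpeg", "-y",
        "-i", "clip.mp4",  # placeholder input
        "-vf", "setpts=3.0*PTS,minterpolate=fps=30:mi_mode=mci",
        "-an",             # drop audio; stage visuals are silent
        "stretched.mp4",   # placeholder output
    ],
    check=True,
)
```

minterpolate is slow to run but looks much smoother than simple frame blending; dedicated retiming and upscaling tools (like the Topaz Labs tools mentioned later) generally produce even better results on large stretches.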
Finally, for image-generation prompts (if you plan to create still images to use as-is or to animate later), similar rules apply. You might generate a high-resolution image of a fantasy landscape or a fractal design as a backdrop. In those prompts, focus on style keywords (e.g. “synthwave skyline, dark purple and pink, with gridlines”) and use aspect ratio tags if available (Midjourney and others let you specify an aspect ratio, e.g. --ar 16:9, to get a wide image suitable for a 16:9 screen). Once you have images, you can import them into Visibox as static backgrounds or run them through an animation tool as described in the next section.
Making AI Videos Loop Seamlessly
By default, Visibox will loop videos (it has settings for whether a clip plays once or repeats), so it’s up to you to provide content that loops smoothly or to adjust it so that it does. A seamless loop means when the video reaches the end and jumps back to the beginning, the transition isn’t jarring – the motion appears continuous. Here are some techniques to achieve that:
Use Built-in Loop Features: Some AI tools help create loops out-of-the-box. As mentioned, Runway and a few others allow you to generate a looping video by coordinating the start and end frames. If the tool has a “loop” option or guidance, take advantage of it. You can also bring existing clips into Sora (even those created by another tool) and loop them.
Manual Looping via Editing: If your AI-generated clip isn’t a perfect loop, you can fix this with a bit of editing trickery. One common method is the “ping-pong” loop – you play the video forward, then in reverse, and then repeat. This can turn any clip into a loop because it eliminates the jump; however, note that the motion will reverse (which may look odd for some content, but for abstract visuals, it often looks fine). Another method is a crossfade: take the last few seconds of the video and overlap it onto the first few seconds with a dissolve transition in a video editor. This blends the start and end, smoothing the cut. You may need to sacrifice a bit of footage to do this, but if the content is ambient, viewers won’t notice the overlap if done subtly.
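Both techniques are easy to script if you prefer the command line. Here is a minimal sketch using ffmpeg from Python; it assumes ffmpeg and ffprobe are installed and on your PATH, and clip.mp4 is a placeholder file name. Any editor’s timeline tools will do the same job.

```python
import subprocess

def clip_duration(path: str) -> float:
    """Read a clip's duration in seconds with ffprobe."""
    result = subprocess.run(
        ["ffprobe", "-v", "error", "-show_entries", "format=duration",
         "-of", "default=noprint_wrappers=1:nokey=1", path],
        capture_output=True, text=True, check=True)
    return float(result.stdout.strip())

def pingpong_loop(src: str, dst: str) -> None:
    """Play the clip forward, then reversed, so the loop point vanishes.

    Note: ffmpeg's reverse filter buffers the whole clip in memory,
    which is fine for short loops but not for long footage.
    """
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-filter_complex",
         "[0:v]reverse[r];[0:v][r]concat=n=2:v=1:a=0",
         "-an", dst], check=True)

def crossfade_loop(src: str, dst: str, fade: float = 1.0) -> None:
    """Dissolve the clip's tail into its head for a seamless loop.

    The output starts `fade` seconds into the source and ends on a blend
    back into the opening frames, so its last frame matches its first.
    """
    d = clip_duration(src)
    assert d > 2 * fade, "clip too short for this fade length"
    filt = (
        f"[0:v]trim=start={fade},setpts=PTS-STARTPTS[body];"
        f"[0:v]trim=end={fade},setpts=PTS-STARTPTS[head];"
        f"[body][head]xfade=transition=fade:duration={fade}:offset={d - 2 * fade}"
    )
    subprocess.run(["ffmpeg", "-y", "-i", src,
                    "-filter_complex", filt, "-an", dst], check=True)

pingpong_loop("clip.mp4", "pingpong.mp4")
crossfade_loop("clip.mp4", "crossfade.mp4")
```

Before loading the result into Visibox, you can preview the seam by playing it on repeat, for example with ffplay -loop 0 crossfade.mp4.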
Plan for Looping Content: When writing your prompt or choosing what to generate, consider content that naturally loops. For instance, a rotating object, a seamless pattern, waves rolling continuously, or a pulsing light – these all look like they’re inherently never-ending. Avoid prompts that have a defined beginning or conclusion (e.g., “a rocket launches and explodes” has a clear end event, making it unsuitable for looping). Instead, something like “a rocket endlessly orbiting around a planet” would be more loop-friendly.
Remember that even a not-quite-perfect loop may be just fine in the context of the performance. Just experiment and see what feels right.
Animating Still Images and Show Posters
Live performances often have associated graphics – album art, logos, tour posters, etc. Rather than just showing these as static slides, you can use AI tools to animate them and create short videos that still retain the original look. This can create a professional, branded visual that elevates your show. Here are some approaches to animating still images:
Runway Motion Brushes: As discussed under Runway, one straightforward method is using a selective animation tool. If you have an image (say, your album cover art) with some natural element in it (clouds, fire, water, etc.), you can load it into Runway and use the motion brush to make that element move while leaving everything else untouched. For example, if your poster has a sky with clouds, animate the clouds drifting. If it’s just a graphic logo, you could add a subtle glow or shake effect using keyframes in a video editor, but AI tools like Runway can also apply filters or effects to simulate camera shake or a breathing motion on static content.
Image-to-Video Generation: Leverage the image-to-video features in tools like Runway Gen-3 or Luma. You upload the still image as a starting frame, then craft a prompt that describes how the image should come alive. One technique is to imagine a small camera movement within that image: e.g. “slowly pan across this scene” or “a gentle zoom out revealing the full poster, with the text floating above a moving background”. Runway will generate a short video that begins exactly on your image and then moves as instructed. The result is often that the key elements of your original image stay recognizable (the essence and design are intact), but you gain motion, which could be moving light and shadows, a parallax depth effect, or slight refocusing. Runway is noted for handling these camera prompts well, producing high-quality animations from static inputs. Similarly, other tools like Luma’s Dream Machine or Hailuo MiniMax have rolled out image-to-video capabilities that do something akin to 2.5D animation of a still. If one tool doesn’t give you the desired result, try another; each AI has its own style, and one might preserve your image’s “essence” better than another.
Resolution Considerations for Projectors and Video Walls
When generating AI content, you’ll have options for resolution. It’s tempting to think “higher is better,” but in live shows, 1080p (Full HD) or even 720p (HD) is usually sufficient for visuals. Many video generation tools cap their output resolution, and you can usually upscale the results in the same or another tool. Topaz Labs has some great tools for this purpose, and more and more, this kind of upscaling is being built into video editors such as Adobe Premiere and Apple’s Final Cut Pro. Also, most video projectors and video walls are lower resolution than you might think, so 1080p or 720p video may work just fine as-is.
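If you do need to change a clip’s resolution before a show, that too can be scripted. Here is a minimal sketch with ffmpeg (same assumptions as the earlier snippets: ffmpeg on the PATH, placeholder file names). Note this is plain rescaling, not the AI-based detail-recovering upscaling that the Topaz tools perform.

```python
import subprocess

# Rescale an AI render to 1920 px wide for a 1080p projector.
# scale=1920:-2 preserves the aspect ratio and rounds the height to an
# even number, which the H.264 encoder requires.
subprocess.run(
    ["ffmpeg", "-y", "-i", "ai_render.mp4",
     "-vf", "scale=1920:-2",
     "-c:v", "libx264", "-crf", "18",  # high-quality H.264
     "-an", "projector_1080.mp4"],
    check=True,
)
```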
Embracing Flexibility and Creativity
Using AI with Visibox can elevate live shows for VJs and musicians alike. Stay flexible and creative: these tools exist to amplify your imagination. Start with simple experiments like background loops or logo animations, then progress to more complex visuals or live-reactive pieces.
The technology is evolving fast, but the core skills of writing prompts, choosing visual themes, and programming a show will stay valuable. Focus on fun and creativity: AI visuals can depict scenes you could never film and can echo the themes of your music. Whether you’re a beginner or a pro, tools like Bing Video Creator, Sora, Veo, and Runway help expand your stage palette.
Combine AI with Visibox for immersive, original performances. Embrace experimentation and enjoy pushing your live shows to new artistic heights.