3D Design Bureau

Testing new artificial intelligence tools in architectural visualisation

Generating 3D Renderings from Revit Models | Contextualising Characters in Renderings | Transforming Still Imagery into Short Videos | What Does the Future Hold?

Artificial intelligence is transforming the AEC industry, with applications spanning from digital modelling to post-production in 3D imagery. Our team has been exploring these tools for over a year, applying them across various stages of our workflow. In this article, we share our experiences, insights, and takeaways.

Generating 3D Renderings from Revit Models

Our team tested Veras, a Revit plugin developed by EvolveLAB, which uses AI to generate renderings directly from 3D Revit models. By processing text prompts, Veras allows users to incorporate elements such as textures, natural settings, and atmospheric effects into their renders.

AI renderings using the Revit plugin Veras

The tool is promising but not precise. At the time of our testing, for example, Veras didn't recognise specific Revit categories such as walls or windows, processing only general volumes and masses. As a result, the closer the camera is set to the mass, the better the output, which makes the tool particularly useful for interior renders.

The results we got were mixed, but we were still impressed. The tool can serve as a powerful resource for designers during the concept stage, supporting creativity and inspiration.

AI renderings using the Revit plugin Veras

Contextualising Characters in Renderings

We also tested Adobe Photoshop's generative AI tools to help integrate characters into our images. These tools enable adjustments to clothing and appearance, making it easier to match character elements to a CGI's overall look and feel. If the team is working on a winter render, for example, they can easily adapt the clothing of the image population, allowing greater flexibility when using a character library.

Achieving the outcome below required numerous attempts due to the randomness of each generation, highlighting a consistency challenge. We found that the more specific the prompt, using clear nouns and adjectives, the better the results.

We also experimented with Photoshop’s AI tool in a recent commercial project in Germany. This time, we used it specifically to enhance the asset population in the image.

By initially adding 3D people and refining them in Photoshop during post-production, our team achieved some impressive results. However, AI’s unpredictability and inconsistency can make reaching the desired outcome more time-consuming.

We also used AI to generate the background seen through the windows, which proved more efficient: by adding specific details to the prompts, we achieved the desired outcome in less time.

Computer-generated imagery with added humans using artificial intelligence tools in Adobe Photoshop

Transforming Still Imagery into Short Videos

The first tool we tested for AI-powered animation was Runway Gen-2. Once again, the lack of consistency was an obstacle. The tool performed best with subtle camera movements and natural elements such as vegetation and sky. When applied to characters or dynamic camera movements, however, the results were inconsistent and often unusable: in some cases, objects transformed into people, or vice versa, creating unintended distortions.

Additionally, the video duration was limited, making it difficult to create longer animated sequences. Despite these limitations, we see the potential for Runway Gen-2 as a tool for generating quick previews or experimental marketing materials.

Most recently, we also tested Sora by OpenAI for a similar purpose. Sora proved to be the most advanced tool we’ve tested so far, capable of adding slight camera movements and even populating a scene. However, achieving consistent results remained difficult, particularly when attempting to add people into the renders.

The highlight of this test was Sora's ability to extend interior spaces. The AI maintained design continuity while generating virtual spaces that did not exist in the original render, delivering impressive results. However, users have limited control over the final outcome. Since each generated video is unique and cannot be replicated, the tool is difficult to rely on for client-facing projects where predictability and accuracy are essential.

What Does the Future Hold?

Since we started testing AI tools in our workflows, the technology has evolved rapidly. Initially met with scepticism, these tools are now increasingly being adopted into professional workflows. A study by Savills highlights that AI implementation is accelerating across European businesses: "[…] On average, 8% of European companies had adopted at least one AI application in 2023, with the UK and Northern Europe leading the way."

Moreover, in a poll we conducted on our social media channels, 67% of respondents said they hadn't used AI in their projects but were willing to learn more about it, while 17% said they had used it more than once.

Here at 3DDB, we will continue monitoring AI developments in the AEC industry, testing new tools, and sharing our findings. Stay tuned!


Lucas Imbimbo, Digital Marketing Specialist at 3D Design Bureau

Author:

Lucas Imbimbo
Digital Marketing Specialist
at 3D Design Bureau
lucas@3ddesignbureau.com