Mario Ramić of Takeaway Reality explains how AI is revolutionizing AR and VR, from enhancing AR glasses to creating more intelligent NPCs in the metaverse. AI-driven automation is helping analyze user behavior, generate adaptive environments, and improve content creation, though current generative AI tools still have limitations. Scalability remains a challenge, with many AI features being implemented just for hype rather than real user value. Ethical concerns, such as data privacy and user comfort in AI-generated environments, also need careful consideration. Looking ahead, AI’s biggest impact will be in making AR/VR experiences more interactive, intelligent, and seamlessly integrated into daily life.

What are the biggest breakthroughs AI will bring to AR/VR in the next five years?

I believe that the integration of AI with compact AR glasses could be a game changer. For a lot of people in the industry, AR is the end game. The whole mission of this industry is to transform the way people interact with technology, and the recent success of the Meta Ray-Ban glasses, which have AI integrations, is a clear indicator that people want something light and almost seamless. Once visual augmented reality gets integrated into these glasses, it will be a game changer. As for AI integration with VR, we have been pioneering features such as the ability to talk to an AI model of a real-life educator in Virtual Reality about the courses you are taking. On other projects, we have implemented AI to predict a player's behavior or to analyze what users are doing. Sometimes it is even as simple as using AI to transcribe the interactions and save them as a text output. It is all about using AI to achieve better results instead of forcing it into the experience because it is a trend right now.
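To make the transcription point concrete, here is a minimal sketch of turning recorded VR voice interactions into a text output. It assumes the OpenAI Whisper API as the speech-to-text provider; the file paths, directory layout, and model name are illustrative and not taken from the interview.

```python
# Minimal sketch: transcribing recorded VR voice interactions to text.
# Assumes the OpenAI speech-to-text API (Whisper); any other provider or
# an on-device model could be swapped in. Paths and names are illustrative.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def transcribe_session(audio_dir: str) -> str:
    """Transcribe every audio clip captured during a VR session
    and return the combined transcript as plain text."""
    transcript_lines = []
    for clip in sorted(Path(audio_dir).glob("*.wav")):
        with clip.open("rb") as audio_file:
            result = client.audio.transcriptions.create(
                model="whisper-1",
                file=audio_file,
            )
        transcript_lines.append(f"[{clip.stem}] {result.text}")
    return "\n".join(transcript_lines)

if __name__ == "__main__":
    text = transcribe_session("session_recordings")
    Path("session_transcript.txt").write_text(text, encoding="utf-8")
```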

How is generative AI redefining AR/VR content creation, and what are its biggest limitations?

Not very much. Most VR development is quite specialized and complex and requires very skilled developers. They can get help from AI tools, but mostly for very small tasks; most of the development done in VR is still labor-intensive. The biggest area where AI tools help us is the more traditional work that comes alongside some of our VR projects, such as web development or database and admin portal creation. The biggest area where I see potential is generatively creating 3D models, but no tool on the market that we have tested is even close to creating something usable, let alone something that meets our quality standards.

How can AI-driven environments dynamically change based on user actions in real time?

This can be done through processes such as procedural generation of the environment. For example, users could enter a prompt and the whole environment around them could change. It is basically like choosing a fantasy place where you want to be. It could be quite exciting in VR, but I see it more as a gimmick than something that most businesses would find useful. When it comes to games, I see a bit more usefulness for this, but it would need to be very well conceptualized so that the user doesn't end up with a confusing mess. I am more of a fan of the artistic approach, where the experience is crafted specifically to fit the environment and feeling that you want to achieve with your game or project.
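As a rough illustration of prompt-driven procedural generation, the sketch below maps keywords in a user prompt to an environment theme and scatters props with a seeded generator so the same prompt rebuilds the same scene. The themes, props, and parameters are hypothetical and not taken from any project mentioned in the interview.

```python
# Minimal sketch of prompt-driven procedural generation: keywords in the
# prompt select a theme, and a deterministic seed controls prop placement.
import random
from dataclasses import dataclass

THEMES = {
    "forest": {"ground": "grass", "props": ["pine", "rock", "fern"]},
    "desert": {"ground": "sand",  "props": ["cactus", "dune", "boulder"]},
    "cliff":  {"ground": "stone", "props": ["ledge", "gull", "shrub"]},
}

@dataclass
class PropPlacement:
    kind: str
    x: float
    z: float

def generate_environment(prompt: str, seed: int = 42, prop_count: int = 20):
    """Pick a theme from the prompt and scatter props deterministically,
    so the same prompt and seed always produce the same layout."""
    theme = next((t for t in THEMES if t in prompt.lower()), "forest")
    rng = random.Random(f"{seed}:{theme}")  # string seeding is deterministic
    props = [
        PropPlacement(kind=rng.choice(THEMES[theme]["props"]),
                      x=rng.uniform(-50.0, 50.0),
                      z=rng.uniform(-50.0, 50.0))
        for _ in range(prop_count)
    ]
    return {"theme": theme, "ground": THEMES[theme]["ground"], "props": props}

if __name__ == "__main__":
    scene = generate_environment("take me to a quiet forest at dusk")
    print(scene["theme"], scene["ground"], len(scene["props"]), "props placed")
```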

What are the biggest roadblocks in scaling AI-powered AR/VR for mainstream adoption?

The biggest roadblock is finding a use case that is actually useful to the end user. These days we are in an AI hype cycle, and many companies are integrating AI everywhere just to excite investors without thinking about the user. The result is that in the broader tech landscape (but also in AR/VR) there are a lot of AI features that either aren't useful to the user or don't make sense within the broader idea of the app, but are just there to tick an AI checkbox. I think this hurts the adoption of the technology in the long term, as people have already started associating AI-generated content with something cheap or not useful. We have implemented AI in many projects where it made sense and made the user experience better, but we have also advised clients against implementing AI when we saw that it was not necessary or useful to do so.

What ethical concerns arise with AI-driven immersive experiences, and how should they be addressed?

A lot of the ethical considerations are the same as in other technology areas, such as the source of the training data for these models and the privacy implications of using AI tools in general, since most people rely on providers such as ChatGPT instead of implementing their own AI, which would be safer for users' privacy. In terms of using AI collaboratively within a virtual environment, there is also the question of creating environments that are uncomfortable for other players. For example, imagine you were in a shared room and someone used AI to create an environment where you are at the top of a cliff. For most people this would be fine, but if you are afraid of heights, it will be very uncomfortable in VR.

What role will AI play in making metaverse experiences more intelligent and interactive? 

The biggest opportunity I see here is the ability to make NPCs much more intelligent. I think we are already at a level where you could make an avatar and have it speak to you without you realizing that it is not an actual person. Of course, you would need to be informed that you are speaking to an AI, but I think it would make some metaverse spaces much more interesting, especially during off-peak times when there are not a lot of users.
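As a rough sketch of what an LLM-backed NPC could look like, the example below keeps a fixed persona plus the running conversation history and sends both to a chat model on every player utterance. It assumes the OpenAI chat completions API; the persona, model name, and disclosure rule are illustrative assumptions, not details from the interview.

```python
# Minimal sketch of an LLM-backed metaverse NPC: persona + history in,
# next line of dialogue out. Assumes the OpenAI chat completions API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

class MetaverseNPC:
    def __init__(self, persona: str, model: str = "gpt-4o-mini"):
        self.model = model
        self.history = [
            {"role": "system",
             "content": persona + " If asked, always disclose that you are an AI."},
        ]

    def reply(self, player_message: str) -> str:
        """Append the player's line, ask the model for the NPC's answer,
        and keep the exchange in the history for future turns."""
        self.history.append({"role": "user", "content": player_message})
        response = client.chat.completions.create(
            model=self.model,
            messages=self.history,
        )
        answer = response.choices[0].message.content
        self.history.append({"role": "assistant", "content": answer})
        return answer

if __name__ == "__main__":
    guide = MetaverseNPC("You are a friendly guide in a virtual gallery.")
    print(guide.reply("What is this exhibit about?"))
```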
