Borja Zamora of Zamora Design explains how AI has become an integral part of their branding and presentation design process. Their AI-first approach enhances efficiency in strategy work, including market research, competitive analysis, and keyword identification, while its role in design remains narrower, largely limited to generating stock images and specific visual elements. Zamora incorporates tools like Midjourney, Leonardo, FLUX, and various LLMs to automate workflows, personalize content, and streamline storytelling. Despite AI’s limitations in fine-tuning design details, the team uses it as an ideation tool, refining outputs manually to meet client expectations. Looking ahead, Zamora predicts AI will first take over tasks with lower design demands before gradually transforming all aspects of the industry, shifting designers into roles focused on guiding AI-generated outputs.
How has AI influenced the way you approach presentation and branding design?
Zamora Design is AI-first. Everything we do, we try to use AI first.
It never works the first time, so we correct and refine our workflows, documenting the process as we go. We also categorize and save the most effective prompts in our content repository.
AI hasn’t replaced anyone on our team, but it has greatly improved the speed and quality of all our strategy work, including market research, competitive analysis, keyword identification, and brand trait development. Its impact on our design work has been more limited, typically focused on generating stock images or very specific visual elements that are otherwise difficult or impossible to find.
What AI tools do you incorporate into your workflow, and how have they improved efficiency?
I use Midjourney, Leonardo, and FLUX for image generation, Logo Diffusion for logo ideation, and Jitter for animated mockups and video. For LLMs, we use ChatGPT, Perplexity, Gemini, and occasionally Claude, depending on specific business needs and workflows.
LLMs primarily act as expert advisors and creative partners; they’re almost agents already. Image generators have allowed us to rapidly produce consistent stock imagery, as well as visuals of highly specific elements that were previously hard or impossible to find. For example, I recently needed 3D illustrations of different parts of a nuclear reactor… good luck building that from scratch. AI did it in minutes.

Have you developed any AI-driven solutions for automating design tasks?
I use extensive, multi-step automations primarily for writing my blog and our client proposals. These workflows typically seek my approval every 1-2 steps to ensure content remains within specific quality standards.
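The approval-gated pipeline described above could be sketched roughly as follows. This is a minimal illustration, not Zamora’s actual tooling: the `Step` type, the step contents, and the `approve` callback (which stands in for the human review every 1–2 steps) are all assumptions.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Step:
    name: str
    run: Callable[[str], str]  # takes the prior output, returns a new draft

def run_pipeline(steps: List[Step], approve: Callable[[str, str], str], initial: str = "") -> str:
    """Run steps in order, pausing at each checkpoint for human approval.

    `approve(step_name, draft)` returns the approved (possibly edited) text,
    or raises to abort — this is the quality gate the interview mentions.
    """
    text = initial
    for step in steps:
        draft = step.run(text)       # in practice, an LLM call per step
        text = approve(step.name, draft)  # human stays in the loop
    return text
```

In a real workflow each `run` would call an LLM (e.g. draft an outline, expand it, polish the tone), and `approve` would surface the draft for review before the next step fires.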
I’ve experimented with Zapier and Make.com for more complex automations involving multiple tools, but I haven’t achieved consistently reliable results. I’ve also created simpler automations, such as chatbots for customer service or internal questions, but found these were overkill for our size and needs.
How do you use AI to enhance storytelling and user engagement in presentations?
We start by training an LLM to become an expert on a particular trend, industry, or client. Once this is achieved, we equip it with the desired personality and tone of voice to closely mimic how our client communicates. We refine and teach the AI until the results match our expectations. This involves sophisticated prompts that we’ve been refining over months and saving in our content repository, documenting iterative improvements version by version.
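A versioned prompt store like the one described above could be sketched as a small class. This is purely illustrative; the interview only says that prompts are saved in a content repository and improvements documented version by version, so the `PromptRepository` class and its methods are hypothetical.

```python
class PromptRepository:
    """Minimal versioned prompt store (illustrative sketch, not actual tooling)."""

    def __init__(self):
        self._store = {}  # prompt name -> list of version records

    def save(self, name: str, prompt: str, note: str = "") -> int:
        """Append a new version and return its version number."""
        versions = self._store.setdefault(name, [])
        versions.append({"version": len(versions) + 1, "prompt": prompt, "note": note})
        return versions[-1]["version"]

    def latest(self, name: str) -> str:
        """Return the most recent version of a prompt."""
        return self._store[name][-1]["prompt"]

    def history(self, name: str):
        """Return (version, note) pairs documenting the iterative improvements."""
        return [(v["version"], v["note"]) for v in self._store[name]]
```

The design choice here is append-only history: older prompt versions are never overwritten, so each refinement round stays documented.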
Can AI personalize design elements based on audience preferences or business needs?
Absolutely. Once we’ve project-trained an AI, we generate multiple versions of the same narrative. Next, we train a second, “objective” AI to behave like a specific audience, complete with distinct needs, wants, dreams, fears, personality traits, hobbies, etc.
This “audience” AI then evaluates the narrative from its perspective, helping us identify what works and what doesn’t, allowing significant personalization. This approach has reduced our research time by approximately 70-80%, decreasing the need for expert interviews and extensive feedback rounds.
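In outline, the two-AI setup above might look like the sketch below. The `evaluate_variants` helper, the 0–10 scoring scale, and the `llm` callable are all illustrative assumptions; in practice the callable would wrap a real chat-completion API, with the audience profile as the system prompt.

```python
from typing import Callable, List

# Stand-in signature for an LLM call: (system_prompt, user_message) -> reply
LLM = Callable[[str, str], str]

def evaluate_variants(variants: List[str], audience_profile: str, llm: LLM) -> List[str]:
    """Score each narrative variant from the audience persona's perspective
    and return the variants ranked best-first."""
    system = (
        "You are this audience: " + audience_profile + ". "
        "Rate the narrative from 0 to 10 for how well it speaks to your "
        "needs, fears, and goals. Reply with the number only."
    )
    scored = []
    for variant in variants:
        reply = llm(system, variant)          # the "audience" AI evaluates
        scored.append((float(reply.strip()), variant))
    # Highest-rated narrative first
    return [v for _, v in sorted(scored, key=lambda s: s[0], reverse=True)]
```

The key idea is the separation of roles: one AI generates the narrative variants, while a second, persona-loaded AI judges them, which approximates the expert interviews and feedback rounds the agency says it has largely replaced.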
What are the biggest limitations of AI in creative design, and how do you overcome them?
The biggest limitation is that AI struggles with fine-tuning, which is needed 100% of the time with client work. Colors will always be slightly off, fonts will be slightly too small, and fingers slightly too many. Currently, AI serves as an ideation tool rather than a final solution. We overcome these limitations by using AI-generated assets as a starting point, refining them either from scratch or exporting and manually editing the assets.
How do you see AI evolving in the design industry over the next five years?
AI will undoubtedly improve significantly. Initially, it will replace tasks with “lower design needs”, such as internal all-hands decks or newsletters. Eventually, it’ll expand to all aspects of design, and designers’ roles will shift toward guiding AI outputs—converting client needs into design prompts, managing projects, evaluating results against project briefs and, of course, QA-ing the final designs.