Two out of three isn’t good enough. That’s the challenge we face any time we combine a prompt (“We Ask”), a model (“AI”), and an objective (“To Do Things”). As you adopt generative AI, you can’t afford to get only two of these elements right. You can have the right model for the right task, but the wrong prompts will still degrade your output. Aligning all three is essential.
1. Prompt + Model + Wrong Objective = 😶🌫️ Senseless automation
2. Prompt + Wrong Model + Objective = 🥸 Slow or inaccurate outputs
3. Wrong Prompt + Model + Objective = 🫣 Hoping things just work out
Prompt Engineering Is Now Part of the Code
We Ask. As someone who’s been a product manager, designer, and solutions architect, I know the value of alignment, and of a well-groomed backlog. I’ve witnessed code evolve from purely syntax-based instructions to something more conversational with prompting. Many of the programs I build today are mostly code written by AI, but the code itself contains prompts too. Writing effective prompts is the new skill. Whether we’re spinning up prototypes in Replit or orchestrating workflow logic in LangChain, we’re guiding AI with the right “ask” so we get the right results.
This shift is changing not just programming but the entire creative process. You don’t simply code an interface anymore; you also prompt it. AI-enabled applications embed prompts within their features to guide how a model applies each skill. You describe your desired outcome—style, function, domain constraints—and let a model propose solutions. While one day many of these processes will be more integrated, keep in mind that you are the glue: if you can automate 80%, “take the gain and use your brain for the rest.”
Different Models, Different Purposes
AI. We have so many large language models at our disposal—from OpenAI, LLaMA, Mistral, and others—each serving a unique need for scale, cost, or specialization. In my recent talks on using AI agents in business, I highlighted how Retrieval-Augmented Generation (RAG) helps these models access external data safely, expanding what they can do without losing context. Decoupling skills from the model is also a great practice; it can give smaller models without web capabilities the ability to still access the web through a chain of processes.
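As a rough illustration of that decoupling, here is a minimal sketch in Python. The model never touches the web itself; a separate retrieval step does, and its results are injected into the prompt, which is the core idea behind RAG. The `search_web` and `call_llm` functions are placeholders standing in for a real search client and a real model API, not any particular library.

```python
def search_web(query: str) -> list[str]:
    # Placeholder: a real implementation would call a search or retrieval API.
    return [f"Result snippet about {query!r}"]

def call_llm(prompt: str) -> str:
    # Placeholder: a real implementation would call a hosted or local model.
    return f"Answer based on: {prompt[:60]}..."

def answer_with_retrieval(question: str) -> str:
    """Chain: retrieve first, then ask the model with the snippets as context."""
    snippets = search_web(question)
    context = "\n".join(snippets)
    prompt = f"Context:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)

print(answer_with_retrieval("retrieval-augmented generation"))
```

The point of the pattern is that the "web skill" lives outside the model, so even a small local model can answer questions about fresh data it was never trained on.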
There are the mainstream models mentioned above, but there are many other model providers. Some specialize in finance, chat, image creation, and more. If we’re fine-tuning for creative outputs, we might go with a certain open-source model. For enterprise-scale chat or advanced reasoning, we might choose an enterprise version of GPT-4.5 or something similarly powerful.
Perhaps most significant, your company can create its own models too. If you have enough data, you may be able to fine-tune an open-source model into something new, or, if you’ve got the resources, train your own models from scratch. I like to believe that in the not-so-distant future, we will all have our own models that understand how we write, speak, and decide, and even models we license to others based on how we do our work.
Human-Centered Design & Service Blueprinting
To Do Things. No matter the model, success depends on identifying real problems worth solving. That’s where the Human-Centered Design (HCD) process fits in. We observe actual user needs, blueprint services end-to-end, and then iterate with prototypes, whether they’re screen sketches, clickable prototypes, or near-production AI-enabled demos. We keep real people in the loop as we refine prompts, select models, and confirm objectives.
It’s about asking good questions (“We Ask”), using the right AI or chain-of-thought approach (“AI”), and focusing on a meaningful use case (“To Do Things”). Skip any one piece, and the solution falls short. You can have the best prompts and the best models, but if you apply them to the wrong objectives, your results will miss the mark.
As code becomes part-prompt and part-instruction, creative processes must adapt. We’re seeing designers, developers, and business strategists unify around an iterative AI-driven workflow. This cross-functional interplay is a big reason I advocate service blueprinting and prototyping with real users. You see firsthand where AI adds value and where the human touch remains critical.
AI Agents & Prompt Chaining
To do multiple things. AI agents go beyond single queries. By chaining prompts and tasks, we can handle complex objectives. Frameworks like LangChain allow us to script the flow: gather data, reason over it, and produce results. Combine that with RAG, and we’ve got an AI-enabled librarian that can retrieve and load documents on demand.
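That gather-reason-produce flow can be sketched in a few lines of plain Python. This is not LangChain’s actual API, just the chaining pattern itself: each step’s output becomes part of the next step’s prompt. The `call_llm` function is a placeholder for a real model call.

```python
def call_llm(prompt: str) -> str:
    # Placeholder model: echoes a tag so the chain is visible in the output.
    return f"[model output for: {prompt.splitlines()[0]}]"

def chain(question: str) -> str:
    # Step 1: gather the raw material.
    gathered = call_llm(f"List the key facts needed to answer: {question}")
    # Step 2: reason over what was gathered.
    reasoning = call_llm(f"Reason step by step over these facts:\n{gathered}")
    # Step 3: produce the final result from the reasoning.
    return call_llm(f"Write a final answer from this reasoning:\n{reasoning}")

print(chain("Which model should we use for enterprise chat?"))
```

Swapping the placeholder for a real model call (and adding a retrieval step like the RAG example above) is essentially what agent frameworks orchestrate for you.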
One day, we’ll all have our own personal agents AND models, fine-tuned on how we communicate, prioritize, and deliver work. That future isn’t far off. We’re already training models on brand guidelines, style preferences, and standard operating procedures. We are seeing people automate help desk tasks, SDLC backlog management, prototyping, and much more. Soon, “having an agent” will be as normal as having an email address.
Prompt 1, using Midjourney’s default image model: “Realistic film, John Waters filthy style, taken with direct flash with overexposure and hard shadows, uncle rico at his favorite diner.”
Prompt 2, using Runway ML’s v4 video model: “pov breakfast with the prom king of 1972”
Conclusion
Aligning prompts, models, and objectives ensures AI truly delivers—because two out of three is never enough.
We Ask: Prompt engineering must be strategic.
AI: The model must match the task.
To Do Things: The objective has to solve a real user or business problem.
As CEO of Virgent AI, I’ve watched companies transform by respecting these three elements equally. We’re living in a moment when code is half written by us, half by AI, and soon we’ll all have personal agents guiding our workflows. If you haven’t already, please subscribe to this blog to see how we apply this thinking in real-world contexts.
It’s an exciting time. Prompt responsibly, design compassionately, and keep your objective front and center, because that’s how we harness AI for meaningful change. Interested in more content like this? Check out “Curating Agent Clusters,” and consider subscribing.