Many organisations are still using large language models mainly as conversational advisors. Others are making strategic shifts and using AI to drive deterministic outcomes. In between are many organisations trying to move forward while burning time, resources, patience, and sometimes reputation.
The difference often comes down to skill sets, mindsets, and workflows. The tools are powerful and improving weekly.
Hitching a Ride
To use AI effectively at the conversational level, important skills include:
- Prompting well
- Providing the right context
- Breaking tasks into steps that fit model limitations
- Choosing appropriate models
- Guiding the conversation toward a useful result
Typical applications include:
- Summarising or improving text
- Generating images or design work
- Accelerating research
- Creating presentations or communication artefacts
- Getting business or product advice from a “virtual board”
Most workflows here occur through chatbot interfaces such as ChatGPT, Claude, or Gemini. It’s a bit like discussing ideas with a knowledgeable colleague or delegating work to a capable intern.
Driving
At the next level, things change significantly:
- Moving from advice and text transformation to action
- Using agentic tools that can execute tasks: generate documents, presentations, code, and tests; create files; update codebases; run tests; integrate applications; and perform transactions
Tools such as Microsoft Copilot, Claude Code / CoWork, and Gemini Enterprise support this through:
- APIs for integration into applications and workflows
- Desktop, browser, and mobile clients for defining and managing agents
- Parallel goal-seeking agent workflows that can run for extended periods
- Permissions allowing agents to interact with local systems, operate services, and perform transactions
Many of these tools rely on the Model Context Protocol (MCP), an open standard that enables interoperability between AI systems and traditional software.
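To make this concrete, here is a minimal sketch of what an MCP interaction looks like on the wire. MCP messages are JSON-RPC 2.0, and `tools/call` is the method an AI client uses to ask a server to run a tool. The tool name `create_invoice` and its arguments are purely hypothetical examples, not part of any real server.

```python
import json

def build_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Serialise a JSON-RPC 2.0 request asking an MCP server to run a tool.

    The "tools/call" method and params shape follow the Model Context
    Protocol specification; the tool itself is a made-up illustration.
    """
    request = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }
    return json.dumps(request)

# A hypothetical call asking a server-side tool to raise an invoice.
message = build_tool_call(1, "create_invoice", {"customer": "ACME", "amount": 120.0})
print(message)
```

The point of the standard message shape is that any MCP-capable client (a chatbot, an IDE, an agent runtime) can drive any MCP server without bespoke integration code.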
Skill sets for this level shift toward strategy, architecture, design, management, and quality assurance — much closer to executive and organisational capability.
It also opens the possibility of embedding AI directly into core operational capabilities, requiring careful choices about application areas, technologies, and organisational change.
Moving Beyond LLMs
Large language models are fundamentally prediction machines trained on large public corpora such as literature, the internet, and social media. Their answers reflect averages and biases in that material, and they can hallucinate — producing plausible but incorrect responses.
This can work where humans remain in the loop. It is far less suitable where reliable automation of operational tasks is required.
For deterministic outcomes, and lower compute consumption, we will increasingly need other forms of AI based on ontologies and axioms.
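As a toy illustration of what axiom-based reasoning means in practice, the sketch below forward-chains over explicit if-then rules: given the same facts and rules, it always derives the same conclusions, with no sampling involved. The invoice-approval domain and all rule names are illustrative assumptions, not a reference to any particular product.

```python
def forward_chain(facts: set, rules: list) -> set:
    """Apply (premises -> conclusion) rules until no new facts appear.

    Deterministic by construction: output depends only on facts and rules.
    """
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# Hypothetical axioms for a back-office task.
rules = [
    ({"invoice_received", "amount_under_limit"}, "auto_approve"),
    ({"auto_approve"}, "schedule_payment"),
]
facts = {"invoice_received", "amount_under_limit"}
print(sorted(forward_chain(facts, rules)))
# → ['amount_under_limit', 'auto_approve', 'invoice_received', 'schedule_payment']
```

Because every conclusion traces back to a named rule, this style of system can explain and reproduce its decisions, which is exactly the property that statistical next-token prediction lacks.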
More on this in the next post.
Read about our upcoming workshop here.
