The term "agent" is applied to everything from chatbots to robotic arms. This imprecision obscures a fundamental architectural distinction: automation executes predefined sequences, while agentic systems navigate uncertainty toward goals. The difference matters because it determines what problems can be solved.
Scripted Flows vs Goal-Driven Loops
Traditional automation follows scripts: Step 1, Step 2, Step 3. If Step 2 fails, halt. If the UI changes, halt. If an unexpected dialog appears, halt. The script encodes both the goal and the exact path.
Agentic execution separates goals from paths. The goal is "submit this insurance claim." The path is determined at runtime based on what's actually on screen. Dialog appeared? Dismiss it. Button moved? Find it. Field renamed? Map it.
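The contrast can be sketched in a few lines. The snippet below simulates a tiny UI where an unexpected dialog covers the form; `FakeScreen`, the element labels, and the retry budget are all illustrative stand-ins, not a real automation API.

```python
class FakeScreen:
    """Toy UI: one form button, with an unexpected dialog covering it."""
    def __init__(self):
        self.elements = {"Submit Claim": "submit"}
        self.dialog_open = True  # the surprise the script didn't plan for

    def visible(self):
        return ["Dismiss"] if self.dialog_open else list(self.elements)

    def click(self, label):
        if self.dialog_open:
            if label == "Dismiss":
                self.dialog_open = False
                return True
            return False  # click swallowed by the dialog
        return label in self.elements

def scripted_flow(screen):
    # Encodes the exact path: fails the moment reality deviates.
    return screen.click("Submit Claim")

def goal_driven_loop(screen, goal="Submit Claim", max_steps=5):
    # Encodes only the goal; the path is chosen from what's actually on screen.
    for _ in range(max_steps):
        if screen.click(goal):
            return True
        if "Dismiss" in screen.visible():
            screen.click("Dismiss")  # clear the obstacle, then retry
    return False

print(scripted_flow(FakeScreen()))     # False: the dialog broke the script
print(goal_driven_loop(FakeScreen()))  # True: the loop adapted
```

Both functions pursue the same goal; only the goal-driven loop survives the dialog, because it re-reads the screen instead of trusting a hard-coded path.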
This isn't a minor enhancement to automation. It's a different computational model — one that requires perception, state tracking, and decision-making under uncertainty.
Why Most "Agents" Stop at Planning
Current AI agents are excellent planners. They can decompose goals, sequence tasks, and generate reasonable action plans. But planning is the easy part.
The hard part is execution in environments that don't cooperate. Screens that change between observations. Actions that sometimes work and sometimes don't. States that can't be directly queried.
Most agent frameworks punt on execution. They call external tools, invoke APIs, or generate code. That works when such interfaces exist. In enterprise environments full of legacy systems, they often don't.
Why Execution Needs Perception + Control
Real agentic execution requires a tight perception-action loop. Observe the screen. Interpret the state. Decide the next action. Execute the action. Observe the result. Adjust.
This loop runs continuously — not once per "step" but potentially dozens of times per logical action. Click a button. Did it respond? Did a loading indicator appear? Did the expected screen load? Each observation triggers potential re-planning.
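A minimal sketch of that inner loop, under stated assumptions: `flaky_click` stands in for an action that sometimes silently fails to register, and `observe` stands in for real perception (which in practice would come from a vision model or accessibility tree). All names here are hypothetical.

```python
import random

random.seed(0)  # deterministic for the example

def observe(env):
    # Placeholder for real perception: report what the screen shows now.
    return {"loaded": env["loaded"]}

def flaky_click(env):
    # Simulate an action that sometimes works and sometimes doesn't.
    if random.random() < 0.5:
        env["loaded"] = True

def run_action(env, retries=10):
    """One *logical* action may take many observe-act cycles."""
    for attempt in range(1, retries + 1):
        flaky_click(env)          # act
        state = observe(env)      # re-observe after every action
        if state["loaded"]:       # interpret: did the expected screen load?
            return attempt
    raise RuntimeError("action never took effect")

env = {"loaded": False}
print(run_action(env))  # prints how many cycles the loop needed
```

The point is structural: success is established by observation after the fact, not assumed from having issued the command.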
Perception must be robust to visual variation. Control must be precise despite system latency. State estimation must handle partial observability. These are not solved problems.
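As a toy illustration of the robustness point (and of "Field renamed? Map it." from earlier): instead of demanding an exact label, map an expected field name to whatever label actually appears. The labels and similarity cutoff below are illustrative assumptions; production systems would use far stronger perception than string similarity.

```python
import difflib

def map_field(expected, visible_labels, cutoff=0.6):
    """Return the closest on-screen label to the expected field name, or None."""
    match = difflib.get_close_matches(expected, visible_labels, n=1, cutoff=cutoff)
    return match[0] if match else None

labels = ["Claimant name", "Claim amt ($)", "Policy no."]
print(map_field("Claim amount", labels))  # → "Claim amt ($)"
print(map_field("Signature", labels))     # → None: no plausible match
```

A scripted flow keyed on the literal string "Claim amount" would halt here; a loop that maps intent to observation keeps going.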
The Architectural Implication
The automation-to-agent transition isn't an incremental improvement. It requires a different architecture: probabilistic instead of deterministic, closed-loop instead of open-loop, adaptive instead of fixed.
Systems built for automation cannot evolve into agentic systems without fundamental redesign. The runtime assumptions are incompatible.
This is why RPA vendors struggle to become "intelligent automation" vendors. Adding AI to a deterministic runtime doesn't make it agentic. It makes it deterministic automation with AI decorations.
Key Takeaway
The automation/agentic boundary is architectural, not feature-based. Agentic execution requires perception, uncertainty handling, and closed-loop control — capabilities that cannot be retrofitted onto deterministic automation engines.