Intro
Vanilla coding agents are optimized for shipping web apps fast, and they are great at building SaaS products.
But if you’re connecting sensors, managing edge deployments, or orchestrating hardware fleets, the default agents get completely lost.
Connected infrastructure has constraints that generic AI has never encountered during training.
Missing training data
Hardware-specific protocols
MQTT, Modbus, BACnet, LoRaWAN, Zigbee — these aren’t in the training data at scale. An agent that’s great at React components will hallucinate the moment it has to build on these protocols.
Deployment heterogeneity
Web apps run in the cloud. IoT workloads span a wide range of hardware platforms:
- ARM devices with 512MB RAM
- Industrial gateways with no outbound internet
- Edge servers with intermittent connectivity
- Cloud backends with strict latency requirements
The deployment surface is 10x more complex, and vanilla agents have seen almost none of it.
Debugging requires context you can’t see
When a sensor stops reporting, the problem could be:
- Firmware crash
- Network outage
- Power supply failure
- Configuration drift
- Protocol version mismatch
A web app agent checks logs and returns stack traces. An IoT agent needs to reason about the physical state it cannot directly observe.
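To make that reasoning concrete, here is a minimal sketch of how an agent might rank the failure causes listed above from the last telemetry it has on record. The thresholds, field names, and `DeviceSnapshot` structure are all hypothetical illustrations, not values from any real fleet:

```python
from dataclasses import dataclass

@dataclass
class DeviceSnapshot:
    """Last-known telemetry for a device the agent cannot directly observe."""
    seconds_since_heartbeat: int
    last_battery_voltage: float   # volts
    last_rssi_dbm: int            # radio signal strength at last contact
    reported_fw_version: str
    expected_fw_version: str

def rank_hypotheses(snap: DeviceSnapshot) -> list[str]:
    """Order the usual failure causes by plausibility from stale telemetry."""
    hypotheses = []
    if snap.reported_fw_version != snap.expected_fw_version:
        hypotheses.append("protocol/firmware version mismatch")
    if snap.last_battery_voltage < 3.3:          # illustrative low-battery cutoff
        hypotheses.append("power supply failure")
    if snap.last_rssi_dbm < -90:                 # weak link before it went quiet
        hypotheses.append("network outage")
    if snap.seconds_since_heartbeat > 3600:
        hypotheses.append("firmware crash or configuration drift")
    return hypotheses or ["no anomaly in last telemetry; inspect on site"]
```

The point isn’t the specific cutoffs — it’s that the agent reasons from the last observable state instead of expecting a stack trace.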
The solution
The fix isn’t smarter models. The fix is specific guidance and domain knowledge.
I share my notes, strategies, and reference architectures with my internal AI development team. Then I organize agents the same way I organize project teams in companies I support:
- Clear ownership boundaries - each agent owns a domain and follows specific instructions
- Domain-specific context - notes, patterns, and reference architectures that encode institutional knowledge
- Automated cross-checks - agents challenge each other’s assumptions before code reaches my review
This setup enables LLMs to understand and operate under very specific conditions - the kind of constraints that come from years of shipping real systems, not from reading documentation.
Sample Roles of Specialized Agents
Hardware Context Agent
Knows the target hardware. Understands memory constraints, GPIO mappings, and firmware update patterns. Checks that the code will actually run on the device.
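A sketch of the kind of check this agent runs before code ships — the device profile, headroom factor, and pin numbers are made-up examples, not a real board spec:

```python
# Hypothetical device profiles a Hardware Context Agent might consult.
DEVICE_PROFILES = {
    "arm-gateway-512": {"ram_mb": 512, "flash_mb": 64, "gpio_pins": {4, 17, 27}},
}

def fits_device(profile_name: str, binary_mb: float, peak_ram_mb: float,
                gpio_used: set[int]) -> list[str]:
    """Return a list of constraint violations (empty means the build fits)."""
    p = DEVICE_PROFILES[profile_name]
    problems = []
    if binary_mb > p["flash_mb"]:
        problems.append(f"binary {binary_mb} MB exceeds {p['flash_mb']} MB flash")
    if peak_ram_mb > p["ram_mb"] * 0.8:   # leave 20% headroom for the OS
        problems.append(f"peak RAM {peak_ram_mb} MB leaves no headroom on {p['ram_mb']} MB")
    bad_pins = gpio_used - p["gpio_pins"]
    if bad_pins:
        problems.append(f"GPIO pins {sorted(bad_pins)} not available on this board")
    return problems
```

Returning violations instead of a boolean lets the agent veto with specifics, which is what makes the cross-check useful.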
Protocol Specialist Agent
Deep knowledge of industrial protocols. Knows how to design extensible, scalable MQTT topic structures. Understands how to use X.509 certificates as proof of identity for devices. Won’t hallucinate when defining strict IoT policies.
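As an illustration of what “extensible topic structure” means in practice, here is a minimal sketch. The five-level `env/site/device-type/device-id/channel` convention is a hypothetical example; what the MQTT specification does guarantee is that `/` separates topic levels and that `+` and `#` are wildcards reserved for subscriptions, so they must never appear in a published topic name:

```python
# Hypothetical convention: <env>/<site>/<device-type>/<device-id>/<channel>.

def telemetry_topic(env: str, site: str, device_type: str,
                    device_id: str, channel: str) -> str:
    """Build a published topic name, rejecting reserved MQTT characters."""
    levels = [env, site, device_type, device_id, channel]
    for level in levels:
        if not level or any(c in level for c in "+#/"):
            raise ValueError(f"invalid topic level: {level!r}")
    return "/".join(levels)

def site_subscription(env: str, site: str) -> str:
    """Wildcard filter a backend would subscribe to for one whole site."""
    return f"{env}/{site}/+/+/#"
```

Keeping stable, queryable levels up front (environment, site) and the variable parts at the end is what lets one subscription cover a whole site without renaming topics as the fleet grows.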
Deployment Management Agent
Handles the messy reality: OTA updates, rollback strategies, and configuration management for heterogeneous fleets. Knows the difference between “works on my machine” and “works on 10,000 deployed devices.”
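One piece of that messy reality is staging an OTA update so a bad firmware build never reaches the whole fleet at once. A minimal sketch, assuming made-up canary percentages (1% → 10% → 50% → 100%) — real stage sizes depend on the fleet and the blast radius you can tolerate:

```python
import random

def staged_rollout(device_ids: list[str],
                   stages: tuple[float, ...] = (0.01, 0.10, 0.50, 1.0),
                   seed: int = 42) -> list[list[str]]:
    """Split a fleet into cumulative canary waves for an OTA update.

    Each wave ships only after the previous one reports healthy; if a wave
    fails, everything already updated rolls back and later waves never run.
    """
    rng = random.Random(seed)       # deterministic shuffle, so waves are reproducible
    shuffled = device_ids[:]
    rng.shuffle(shuffled)
    waves, done = [], 0
    for frac in stages:
        cutoff = max(done + 1, int(len(shuffled) * frac))  # at least one new device
        cutoff = min(cutoff, len(shuffled))
        waves.append(shuffled[done:cutoff])
        done = cutoff
        if done == len(shuffled):
            break
    return waves
```

For a 10,000-device fleet this yields waves of 100, 900, 4,000, and 5,000 devices — the difference between bricking one canary and bricking everything.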
Security Auditor Agent
Reviews for attack vectors specific to physical infrastructure: firmware tampering, unencrypted telemetry, supply chain vulnerabilities.
The Cross-Check Iterative Workflow
Before any code reaches my review, it’s been through this cycle:
- Hardware Agent validates constraints
- Protocol Agent verifies communication patterns
- Deployment Agent checks rollout feasibility
- Security Agent audits for vulnerabilities
- Orchestrator captures learnings and updates team memory (this step is critical - it drives the continuous improvement cycle)
This setup isn’t sequential - the agents run in parallel and challenge each other’s assumptions. The final output isn’t “the best guess of one model.” It’s the consensus of a specialized team.
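The parallel cross-check above can be sketched as a small orchestrator. The reviewer functions here are toy stand-ins — in a real setup each would wrap an LLM agent with its own domain context — but the shape is the same: every reviewer runs concurrently, and any non-empty objection list is a veto:

```python
from concurrent.futures import ThreadPoolExecutor

# Toy stand-ins for specialized review agents (hypothetical rules).
def hardware_review(change: dict) -> list[str]:
    return [] if change["peak_ram_mb"] < 400 else ["RAM budget exceeded"]

def protocol_review(change: dict) -> list[str]:
    return [] if "+" not in change["topic"] else ["wildcard in published topic"]

def security_review(change: dict) -> list[str]:
    return [] if change["tls"] else ["telemetry is unencrypted"]

REVIEWERS = {
    "hardware": hardware_review,
    "protocol": protocol_review,
    "security": security_review,
}

def cross_check(change: dict) -> dict[str, list[str]]:
    """Run all reviewers in parallel and collect their objections."""
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, change) for name, fn in REVIEWERS.items()}
        return {name: f.result() for name, f in futures.items()}

def approved(objections: dict[str, list[str]]) -> bool:
    """A change passes only when no reviewer objects."""
    return all(not objs for objs in objections.values())
```

The consensus property falls out of the structure: no single reviewer’s approval is enough, and each veto arrives with a reason the orchestrator can feed back into team memory.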
The Missing Ingredient: Your Domain Knowledge
Most teams try to make this work by prompting harder. They write longer prompts, add more examples, and iterate on the wording.
Based on my experience, that’s the wrong approach.
What actually works for me and my customers:
- Capture your domain knowledge - the attempts that succeeded, the ones that failed, and the lessons learned
- Define agent boundaries - what each agent owns, what it doesn’t, and what it can veto (yes, agents should be allowed to veto in the areas they specialize in)
- Share reference architectures - concrete examples that encode constraints too subtle to explain, solutions diagrams, code samples, security policy documents
Your notes, strategies, and mental models are the missing ingredient to make AI meet (and exceed) your expectations. The model can infer web app patterns from millions of examples. It can’t infer your infrastructure constraints without your help.
The Takeaway
If you’re building for physical infrastructure, stop trying to make generic agents work. They weren’t trained for your domain, and the failure modes in production are expensive.
You need:
- Context about your hardware and deployment environment
- Specialization in protocols, constraints, and failure patterns
- Coordination between agents that cross-check each other
- Organization that mirrors how you structure real project teams
Want to See This in Action?
👉 I enable AI agents to design and build connected infrastructure for hardware-enabled businesses. If you are ready to truly leverage AI in your enterprise, let’s meet to discuss and build your AI team.
