When Form Follows Nothing

Why the rise of AI demands we rethink how workplace systems are designed, not just how tools are used.

Most organizations won’t realize they’ve built an AI system until it’s already in place. Not because they developed a clear strategy, but because the tools quietly embed themselves, driven by curiosity and habit, long before any formal plan takes shape.

We’ve seen this up close. In one recent session, a leadership team at a mid-sized marketing agency stated that their staff was already using AI. When we asked how, they described everyday uses across documents and emails. But they hadn’t defined parameters, ownership, or even basic usage norms. No guidelines. No account management. No real understanding of how prompts work or what data flows where. They had, without meaning to, created a system with rules, behaviors, and outcomes.

That’s the danger of accidental architecture. It’s not malevolent. It’s ambient. One employee drafts internal notes with AI. Another summarizes a meeting. Someone else shares a suggested prompt during onboarding. The pattern isn’t coordinated, but it repeats. Soon, these tools have real influence. They shape communication norms, affect decision quality, and start to signal what kind of thinking is valued. But because there’s no declared structure—no blueprint—the system can’t be shared or repaired. It’s a building with no wiring diagram.

To understand how these systems take shape, we can borrow from design education, where structure precedes expression. Design in this context isn’t visual. It’s systemic. It’s about defaults, hierarchies, and the invisible scaffolding that guides interaction. Typography obeys rhythm. Layout respects flow. These aren’t just aesthetic preferences—they are systems that make meaning legible. Before you create a new typeface or redesign a page layout, you have to understand the rules of composition. How does the reader move through space? Where does the eye go first? How does the design breathe? What structures help them interpret meaning without even realizing it?

The architecture is already forming in companies, even in those that consider themselves “early” in their AI adoption. Once the system is in place, it’s much harder to rewire.

These tools don’t announce themselves. They shift the tempo of a meeting. The tone of a message. They slip into templates and into decisions until they become habits. And when left unexamined, they begin to write the system on our behalf.

At one team we worked with, a staff member was reprimanded for using AI to draft branded content. They didn’t know it was against policy—because no policy existed. They saw a problem, used a tool, and got punished for acting on initiative. The real issue wasn’t misconduct. It was an unacknowledged system with no rules, no guardrails, and no shared language. Moments like that don’t just affect one person. They shape how every employee thinks about innovation, risk, and trust.

When systems are unspoken, people hesitate and self-censor. They move as if in a fog. Decisions are made in silos with invisible reasoning. What follows isn’t disorganization—it’s disorientation. Not just a lack of clarity, but a lack of coherence.

This is what we mean when we say that AI fluency isn’t a technical skill. It’s judgment in context: knowing the kind of system you’re operating in and how your actions ripple within it.

For decision-makers, the challenge isn’t just whether to adopt AI. It’s whether they can see the system they’re already building, even if it hasn’t been acknowledged yet. Architecture isn’t just physical; it’s procedural: each AI-inflected meeting, template, or autogenerated report is an architectural move, contributing to the blueprint that shapes culture. And culture influences how people think, work, and create. These are design acts, whether they’re recognized as such or not. Without an intentional system, decision-makers risk reputational damage and an erosion of internal trust.

We aren’t calling for a rigid blueprint. We’re calling for visibility. If a company is going to integrate generative AI, it should do so with intention—not as a set of scattered features, but as a designed layer of its work environment.

That means asking better questions. What kind of thinking do we want AI to support? What behaviors do we want to model internally? Where are the fault lines between speed and quality, between automation and ownership? And how do we train people not just to use tools, but to understand how those tools shape their output?

We train teams across sectors to view AI as a thinking partner rather than a productivity tool. It’s a subtle shift, but it elevates the discussion from utility to authorship, where the outcome reflects both the action and the thinking that shaped it.

Every system sends a signal—about what matters, what’s permitted, and what gets lost. Generative AI doesn’t build those values. It reflects them. The deeper question isn’t whether we’re using the tools well. It’s whether we’re designing the systems that deserve them.

- Sarah Anderson

Co-Founder | VILAS