In December, our team at Nansen spent a full day with Optimizely inside Opal — their AI platform built for digital experience teams.
It wasn’t a typical demo. We weren’t watching slides. We were building.
What struck me most wasn’t a single feature. It was how much of Opal is about structure.
AI tools are easy to try. Structuring them so they scale inside real programs is harder. That’s the difference Opal is designed to address.
Starting at the Instruction Layer
Before we touched custom agents, we started with instructions.
Instructions operate at the system level. They define voice, tone, compliance guardrails, formatting standards, and contextual requirements. They can apply broadly or be scoped to specific prompts.
Working through this layer reinforced something important: output quality follows configuration. When standards are defined deliberately, results stay consistent. When they’re loosely set, variability creeps in quickly.
Managing this centrally creates alignment across teams before any content or experimentation begins. That early discipline matters more than most organizations expect.
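The idea of a centrally managed instruction layer can be sketched in a few lines. To be clear, none of the field names below come from Opal's actual configuration schema; this is only an illustration, in plain Python, of global standards that narrower prompt-scoped rules can override:

```python
# Hypothetical sketch of a system-level instruction layer. The field names
# are illustrative, not Opal's schema. The point: standards are defined once,
# centrally, and scoped rules layer on top for specific prompt types.

GLOBAL_INSTRUCTIONS = {
    "voice": "confident, plainspoken, no jargon",
    "compliance": ["no unverified claims", "cite data sources"],
    "formatting": "headings in sentence case, short paragraphs",
}

SCOPED_INSTRUCTIONS = {
    # Narrower rules that apply only to specific prompt types.
    "campaign_email": {"tone": "warm", "max_words": 150},
    "seo_summary": {"tone": "neutral", "max_words": 80},
}

def resolve_instructions(prompt_type: str) -> dict:
    """Merge global standards with any prompt-scoped overrides."""
    merged = dict(GLOBAL_INSTRUCTIONS)
    merged.update(SCOPED_INSTRUCTIONS.get(prompt_type, {}))
    return merged

print(resolve_instructions("campaign_email")["tone"])
```

Deliberate configuration lives in one place; every downstream prompt inherits it, which is what keeps output variability from creeping in.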
Extending Capability Through Tools
Next, we worked through tools.
Tools allow agents to connect to external systems — querying databases, calling APIs, performing calculations. Instead of relying only on what’s in the prompt, agents can interact with live data.
This changes the equation.
AI becomes more useful when it engages with real systems. Content can reflect dynamic inputs. Experimentation workflows can generate structured outputs tied to actual performance data. Automation moves beyond drafting and into execution.
The integration layer determines how practical AI becomes inside an organization.
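The tool pattern described above can be sketched as a simple registry of named callables. Opal's real tool interface isn't shown in this post, so everything here is an assumption; the sketch only captures the shape of the idea, with a hard-coded catalog standing in for a live database or API:

```python
# Hypothetical tool registry, not Opal's API. An agent resolves named
# functions that reach external systems instead of relying solely on
# whatever text is in the prompt.

TOOLS = {}

def tool(name):
    """Register a callable so an agent can invoke it by name."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("unit_price")
def unit_price(sku: str) -> float:
    # Stand-in for a database query or API call.
    catalog = {"SKU-1": 19.99, "SKU-2": 4.50}
    return catalog[sku]

def agent_call(tool_name: str, *args):
    """How an agent might resolve a tool call at generation time."""
    return TOOLS[tool_name](*args)

print(agent_call("unit_price", "SKU-2"))
```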
Building Custom Agents
From there, we moved into custom agents.
Opal allows you to define localized prompt templates, set variables, adjust reasoning depth and creativity, connect tools, and structure outputs for delivery inside Optimizely products. Outputs can be text, visual assets, or web-ready content.
What stood out to me was the level of control available without introducing unnecessary complexity.
Instead of relying on ad hoc prompts, teams can formalize repeatable capabilities — campaign drafting, SEO analysis, structured reporting summaries, image generation — each aligned to predefined standards.
That structure creates leverage.
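A minimal sketch of what "formalizing a repeatable capability" means in practice: a parameterized template instead of an ad hoc prompt. The class and parameter names here (including `creativity`) are hypothetical stand-ins, not Opal's interface:

```python
# Hypothetical custom agent: a localized prompt template with defined
# variables and settings, reused across runs. Names are illustrative only.

from string import Template

class CustomAgent:
    def __init__(self, template: str, creativity: float = 0.3):
        self.template = Template(template)
        self.creativity = creativity  # e.g. mapped to model temperature

    def render(self, **variables) -> str:
        """Fill the prompt template with run-specific variables."""
        return self.template.substitute(**variables)

campaign_drafter = CustomAgent(
    "Draft a $channel campaign for $product aimed at $audience.",
    creativity=0.5,
)

print(campaign_drafter.render(
    channel="email", product="Opal", audience="marketing leads"))
```

Because the template and settings are fixed, every execution starts from the same predefined standards; only the variables change.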
Coordinating Multi-Step Workflows
Many enterprise tasks involve more than one step. A blog initiative might require research synthesis, outline development, headline refinement, image generation, formatting, and publishing preparation.
Workflow agents coordinate those stages.
Because workflows are composed of other agents, functionality can be reused and sequenced logically. Conditional logic routes data between steps. Scheduling can be embedded directly into pipelines. Each stage remains parameterized, preserving consistency across executions.
During the session, we built layered examples to test how far this composability could go. One colleague created a workflow that retrieved the International Space Station’s live coordinates and generated a mapped visualization from that data. The example was playful, but it demonstrated something serious: structured agents can compound.
The architecture holds.
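The composability described above can be sketched with plain callables standing in for agents. This is a toy model of the pattern, not Opal's workflow engine: each stage is reusable, stages are sequenced, and a simple condition routes data between steps:

```python
# Hypothetical workflow coordinator. Stages are themselves "agents"
# (plain functions here), reused and sequenced, with conditional logic
# routing data between steps. Illustrative only.

def research(topic):
    return {"topic": topic, "facts": ["fact A", "fact B"]}

def outline(ctx):
    ctx["outline"] = [f"Section on {f}" for f in ctx["facts"]]
    return ctx

def expand(ctx):
    ctx["draft"] = " ".join(ctx["outline"])
    return ctx

def run_workflow(topic, stages, min_facts=1):
    ctx = stages[0](topic)
    # Conditional routing: halt downstream stages if research is too thin.
    if len(ctx["facts"]) < min_facts:
        return {"status": "needs more research", **ctx}
    for stage in stages[1:]:
        ctx = stage(ctx)
    return {"status": "ready", **ctx}

result = run_workflow("ISS tracking", [research, outline, expand])
print(result["status"])
```

Each stage stays parameterized and independently reusable, which is what lets structured agents compound into larger pipelines.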
The Broader Lesson
By the end of the day, the takeaway wasn’t about individual features.
It was about discipline.
Instructions define standards. Tools extend capability. Custom agents formalize tasks. Workflows coordinate complexity. When these components are designed deliberately, AI operates inside governed digital programs rather than alongside them.
For organizations balancing experimentation velocity with compliance and brand integrity, structure determines sustainability.
Scaling without architectural discipline introduces drift. Scaling with defined parameters preserves alignment.
From Exposure to Execution
Our team left with Opal Administrator Certifications and, more importantly, a shared understanding of how Opal should be implemented inside enterprise environments.
Successful AI integration depends less on ad hoc experimentation and more on configuration, oversight, and alignment with an organization's experimentation strategy.
Opal provides the control required to support disciplined execution within the Optimizely ecosystem.
If you’re exploring how Opal could fit into your digital experience programs, we’d be glad to share what we learned and help you structure it correctly from the start.
Schedule a workshop to move from exposure to operational deployment.