I like FAQs. Why is that?
In 2023, I gave a talk in Amsterdam about FAQs, plain and simple FAQs. The distinction I wanted to make: a visually-formatted FAQ (H1 question, paragraph answer, styled nicely) versus a semantically-modelled FAQ in a headless CMS.
The visual, WYSIWYG approach gives you a page. The semantic approach gives you knowledge.
The conceptual use case was about reducing call centre load. FAQs exist to deflect questions from support teams. But if your content is locked into a webpage, that's the only place it works.

For context: with structured content in Contentful, I powered a Next.js FAQ page, a WhatsApp chatbot via Twilio, and Alexa Skills for voice queries. Same information, three channels, zero duplication.
To me, that's treating content as knowledge rather than page elements.
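To make "same information, three channels" concrete, here is a minimal sketch of a channel-agnostic FAQ entry. The field names are hypothetical, not the actual Contentful model: the point is that one structured entry carries the knowledge, and each channel supplies its own presentation.

```python
# One structured FAQ entry (hypothetical field names), two renderers.
# The content never knows it is a webpage, a chat reply, or a voice answer.
faq_entry = {
    "question": "How do I reset my password?",
    "answer": "Use the 'Forgot password' link on the login page.",
    "tags": ["account", "login"],
}

def render_web(entry: dict) -> str:
    """Render the entry as an HTML fragment for a web page."""
    return f"<h1>{entry['question']}</h1><p>{entry['answer']}</p>"

def render_chat(entry: dict) -> str:
    """Render the same entry as plain text for a chatbot or voice skill."""
    return f"{entry['question']}\n{entry['answer']}"

print(render_web(faq_entry))
print(render_chat(faq_entry))
```

Adding a fourth channel means adding a renderer, never duplicating the answer.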
Ontology is having its moment
The other day I read a nice little article on LinkedIn by Tony Seale titled "Ontology is having its moment." It got me thinking again about those concepts I briefly looked into back in 2017.
Seale is a knowledge graph architect who spent a decade at institutions like UBS. He's one of the clearer voices on this topic and worth following if you're curious to dig deeper.
His argument goes like this: foundational AI models know what everyone knows. Your differentiation lies in the knowledge that's yours alone: the domain-specific concepts and relationships only your organisation possesses. When you formalise this as an ontology, you create a semantic index that can guide AI retrieval, validate outputs, and prevent reasoning failures.
What stuck with me: if you've been treating content modelling as a chore before the "real" development starts, you might be missing an opportunity.
This connects to work we already do
If ontology sounds abstract, bear with me. We've actually been doing related work for years under different names.
Domain-Driven Design and ontologies share deep conceptual kinship. Both start from the same premise: before you build anything useful, you need to agree on what things are and how they relate. In DDD, that's the Ubiquitous Language captured in entities, value objects, and bounded contexts. In RDF/OWL, you formalise domain vocabulary into classes and properties with explicit semantics.
The Semantic Web isn't inherently complex. The Semantic Web language, at its heart, is very, very simple. It's just about the relationships between things.
Tim Berners-Lee, inventor of the World Wide Web
Think about it. We've been making ERD diagrams for content models for years. Running domain exploration workshops. Doing event storming to define entities. The connection to formal ontology is closer than it appears.
In composable architecture, this surfaces when modelling content types across systems. A Product in the Commerce engine, the CMS, and a PIM are different bounded-context views of the same real-world concept. An ontology layer, even informally using shared JSON-LD terms, does the integration work that DDD's context mapping describes.
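A sketch of that integration idea, with hypothetical identifiers throughout: two systems publish their own bounded-context view of the same Product, but both use shared JSON-LD terms and the same `@id`, so a catalogue can merge them without a bespoke transformation layer.

```python
import json

# Hypothetical shared vocabulary: the commerce engine and the PIM each
# describe the Product they care about, but agree on identity (@id) and
# on terms (@context), so the views compose mechanically.
CONTEXT = {"@vocab": "https://schema.org/"}

commerce_view = {
    "@context": CONTEXT,
    "@id": "urn:product:sku-123",
    "@type": "Product",
    "offers": {"@type": "Offer", "price": "19.99", "priceCurrency": "EUR"},
}

pim_view = {
    "@context": CONTEXT,
    "@id": "urn:product:sku-123",
    "@type": "Product",
    "name": "Trail Running Shoe",
    "color": "blue",
}

# Merging is trivial because both views agree on identity and vocabulary.
merged = {**commerce_view, **pim_view}
print(json.dumps(merged, indent=2))
```

This is exactly DDD's context mapping, expressed as shared vocabulary instead of bespoke glue code.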
The difference now is that AI systems can actually use this structure.
Headless CMS platforms are partially there
Modern headless CMS platforms already provide building blocks for semantic modelling, even if they don't call it that.
Taxonomies and tags give you controlled vocabularies. References between content types create relationships. Some platforms offer graph-style query languages that let you traverse those relationships in ways traditional REST APIs can't. Validation rules enforce consistency. Multi-reference fields hint at many-to-many relationships.
These features exist. But they're often used pragmatically rather than semantically. A reference field might link a blog post to an author simply because the frontend needs to display the author's name. That's not the same as declaring that "authored by" is a meaningful relationship with specific properties.
The gap is intentionality. Most content models describe what editors can enter, not what things mean. You get structure, but not semantics. The CMS doesn't know that "colour" is aesthetic while "size" determines fit and returns policy. That distinction matters for AI-powered recommendations, search, and customer service automation.
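The colour/size distinction can be sketched in a few lines. Everything here is hypothetical (field names, roles): the point is that once the model declares what an attribute means, downstream automation can act on that meaning instead of treating all fields as interchangeable strings.

```python
# A sketch of closing the "intentionality" gap: the model declares the
# semantic role of each attribute, not just that editors can enter it.
ATTRIBUTE_SEMANTICS = {
    "colour": {"role": "aesthetic", "affects_returns": False},
    "size":   {"role": "fit",       "affects_returns": True},
}

def returns_relevant_attributes(product: dict) -> list[str]:
    """Attributes a customer-service bot should ask about on a return."""
    return [
        name for name in product
        if ATTRIBUTE_SEMANTICS.get(name, {}).get("affects_returns")
    ]

shoe = {"colour": "blue", "size": "42"}
print(returns_relevant_attributes(shoe))  # size determines fit; colour doesn't
```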
We've seen what happens when you close that gap. More than a year ago, we helped citizenM (now part of Marriott) restructure their content model around semantic concepts: hotels, rooms, amenities, services. Not pages. The same content suddenly powered their website, guest app, and in-room tablets. 85% reuse across channels: that's the payoff of modelling your domain rather than your layouts.
Closing that gap doesn't require switching platforms. It requires thinking differently about content modelling from the start.
The schema.org pattern
Seale proposes mirroring internally what Google achieved externally. Google incentivised websites to embed structured data by offering better rankings. The result: JSON-LD is now used on roughly 53% of all websites (source: w3techs), with over 45 million domains implementing Schema.org. That's distributed integration at massive scale.
For enterprises, the idea is to create your internal "schema.you.org" extending public standards. Application teams publish pre-integrated data. A decentralised mesh where services expose JSON-LD, and a central catalogue aggregates without transforming.
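An internal context might look like the sketch below. The namespace, term names, and record are all hypothetical: the pattern is to reuse public schema.org terms where they exist and mint company-specific terms in your own namespace for the concepts only you have.

```python
import json

# Sketch of an internal "schema.you.org"-style context. The "acme"
# namespace and the riskTier term are hypothetical examples of
# internal-only concepts layered on top of the public vocabulary.
INTERNAL_CONTEXT = {
    "@vocab": "https://schema.org/",            # public terms by default
    "acme": "https://schema.acme.example/",     # hypothetical internal namespace
    "riskTier": "acme:riskTier",                # internal-only concept
}

client_record = {
    "@context": INTERNAL_CONTEXT,
    "@type": "Organization",
    "name": "Example Client Ltd",               # plain schema.org term
    "riskTier": "B",                            # company-specific, but still typed
}
print(json.dumps(client_record, indent=2))
```

Any team that publishes records in this shape has already done its share of the integration work: the central catalogue only aggregates.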
Look, you can't hire enough data engineers to integrate everything centrally. But you can distribute that work if you get the standards right.
From vector search to knowledge graphs
Here's where my 2017 intuition finally makes sense to me.
The trajectory of enterprise AI points toward GraphRAG, graph-based retrieval augmented generation. Rather than dumping documents into vector databases, GraphRAG builds knowledge graphs from your content. AI traverses relationships rather than matching similarity.
The catch: GraphRAG is only as good as your underlying structure.
Auto-discovery of graph structures sounds appealing, but it falls apart quickly. Take a product catalogue: without explicit modelling, an AI might infer that "colour" and "size" are equivalent attributes because they appear in similar contexts. You can't infer business logic from co-occurrence patterns.
Semantically modelled content with typed entities and explicit relationships? The knowledge graph builds itself from your CMS. Presentation-centric content with meaning implied by layout? You're looking at a remediation effort.
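A toy example of what "the knowledge graph builds itself" means in practice. The triples below stand in for typed CMS entries (entity, relationship, entity), with made-up hotel data in the spirit of the citizenM example; retrieval follows explicit edges instead of guessing from text similarity.

```python
# Each triple comes straight from a typed reference field in the CMS:
# the relationship name is declared, not inferred from co-occurrence.
triples = [
    ("Room Deluxe", "hasAmenity", "Rain Shower"),
    ("Room Deluxe", "locatedIn", "Hotel Amsterdam"),
    ("Hotel Amsterdam", "offersService", "Late Checkout"),
]

def neighbours(entity: str, relation: str) -> list[str]:
    """Follow one typed edge out of an entity."""
    return [o for (s, p, o) in triples if s == entity and p == relation]

# "Which services does the Deluxe room's hotel offer?" becomes a
# two-hop traversal rather than a similarity search.
hotel = neighbours("Room Deluxe", "locatedIn")[0]
print(neighbours(hotel, "offersService"))
```

With presentation-centric content, none of these edges exist to traverse; that's the remediation effort.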
And there's the external visibility angle too. ChatGPT, Claude, Perplexity and Gemini all actively parse Schema.org markup when generating responses. Sites with proper structured data get cited more often. Your semantic work pays off twice: internal knowledge systems and external AI search results.
In 2017, none of this existed. Now it does.
Where we go from here
When I read about ontologies in 2017, I knew they mattered but couldn't yet say why. The FAQ demo gave me a concrete example. What changed is the AI landscape: GraphRAG, RAG, fine-tuning on organisational knowledge. These all depend on structured, semantic content.
Organisations that treated content models as strategic assets find themselves prepared. Those that didn't now have work to do.
The takeaway for me: your content model isn't just how editors enter data. It's how your organisation's knowledge gets compressed into structure. In the AI era, that structure matters.
We're actively exploring how to bring ontology thinking into content modelling practice. What does a semantically-aware content model look like in a headless CMS like Contentful, Sanity, or Storyblok (amongst others)? How do you balance editorial simplicity with semantic richness? Where do you draw the line between pragmatic and formal?
If you're wondering whether your content model reflects your actual domain or just your page layouts, we'd be happy to think through that with you.
Book a call with us