2 points by fabio_rovai 12 hours ago | 1 comment
  • fabio_rovai 12 hours ago
    Open Ontologies is an MCP server, written in Rust, that lets LLMs build, validate, query, and govern OWL/RDF ontologies through 39 tools backed by an in-memory Oxigraph triple store.

    Why it exists

    LLMs understand ontology theory and can generate valid Turtle/OWL, but they also hallucinate hierarchies, invent properties, and produce invalid ontologies. Prompting alone doesn’t fix this; tools do.

    Open Ontologies implements a generate → validate → iterate loop: the LLM generates Turtle; the tools validate it in a real triple store, lint it, and query it; the LLM fixes the reported issues. The LLM orchestrates; the tools are the source of truth.
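    The loop can be sketched as follows. `llm_generate` and `validate_turtle` are illustrative stubs, not the project’s API: in practice the LLM proposes Turtle and the server’s `validate` tool (backed by Oxigraph’s parser) is the source of truth.

```rust
// Hypothetical sketch of the generate -> validate -> iterate loop.

fn llm_generate(feedback: Option<&str>) -> String {
    // Stub: the first attempt is missing the terminating dot; after
    // feedback, the "LLM" emits a corrected document.
    match feedback {
        None => ":Dog a owl:Class".to_string(),      // invalid Turtle
        Some(_) => ":Dog a owl:Class .".to_string(), // corrected
    }
}

fn validate_turtle(doc: &str) -> Result<(), String> {
    // Stub validator: real validation parses the document in a triple store.
    if doc.trim_end().ends_with('.') {
        Ok(())
    } else {
        Err("statement not terminated with '.'".to_string())
    }
}

fn main() {
    let mut feedback: Option<String> = None;
    for attempt in 1..=3 {
        let doc = llm_generate(feedback.as_deref());
        match validate_turtle(&doc) {
            Ok(()) => {
                println!("attempt {attempt}: valid, ready to load");
                break;
            }
            Err(e) => {
                println!("attempt {attempt}: invalid ({e}), retrying");
                feedback = Some(e);
            }
        }
    }
}
```

    The essential property is that the verdict comes from a parser, not from the model’s self-assessment.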

    What it does

    The server exposes 39 MCP tools over JSON-RPC. Core workflow:

    validate – catch Turtle/OWL syntax errors

    load – store data in Oxigraph

    stats – sanity-check classes, properties, triples

    lint – detect missing labels, domains, ranges

    query – run SPARQL on the graph

    diff – compare ontology versions
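    On the wire, each of these is an MCP `tools/call` JSON-RPC request. A minimal sketch of what a `query` call could look like; the argument name `sparql` is an assumption for illustration, not the server’s documented schema:

```rust
// Sketch of one MCP tool invocation as JSON-RPC 2.0.
// MCP invokes tools via the "tools/call" method with a tool name and
// an arguments object; the "sparql" key here is illustrative only.

fn query_request(id: u64, sparql: &str) -> String {
    format!(
        r#"{{"jsonrpc":"2.0","id":{id},"method":"tools/call","params":{{"name":"query","arguments":{{"sparql":"{sparql}"}}}}}}"#
    )
}

fn main() {
    let req = query_request(1, "SELECT (COUNT(*) AS ?n) WHERE { ?s ?p ?o }");
    println!("{req}");
}
```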

    It also supports a Terraform-style lifecycle:

    plan – preview changes and risk

    enforce – check design pattern rules (e.g., BORO)

    apply – safe reload or migration

    monitor – SPARQL watchers with alerts

    drift – detect schema changes
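    Conceptually, `diff` and `drift` both reduce to comparing two graphs as sets of triples. A toy sketch with triples as plain strings (the real server diffs parsed graphs in Oxigraph):

```rust
use std::collections::HashSet;

// Illustrative set-difference view of diff/drift: which triples were
// added and which were removed between two ontology versions.

fn diff<'a>(
    old: &'a HashSet<&'a str>,
    new: &'a HashSet<&'a str>,
) -> (Vec<&'a str>, Vec<&'a str>) {
    let added: Vec<&str> = new.difference(old).copied().collect();
    let removed: Vec<&str> = old.difference(new).copied().collect();
    (added, removed)
}

fn main() {
    let v1: HashSet<&str> =
        HashSet::from([":Dog rdfs:subClassOf :Animal", ":Cat rdfs:subClassOf :Animal"]);
    let v2: HashSet<&str> =
        HashSet::from([":Dog rdfs:subClassOf :Mammal", ":Cat rdfs:subClassOf :Animal"]);
    let (added, removed) = diff(&v1, &v2);
    println!("added: {added:?}");
    println!("removed: {removed:?}");
}
```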

    Extras include data ingestion (CSV/JSON/XML/XLSX/Parquet), SHACL validation, OWL-RL reasoning, terminology crosswalks, and ontology alignment.
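    To give a flavor of what OWL-RL materialization does, here is a toy forward-chaining sketch of a single rule, rdfs:subClassOf transitivity (scm-sco in the OWL 2 RL rule set). The real reasoner implements the full rule set over the triple store; this is only the shape of the computation:

```rust
use std::collections::HashSet;

// Forward chaining to fixpoint over one rule:
// scm-sco: A subClassOf B, B subClassOf C  =>  A subClassOf C.

fn closure(mut facts: HashSet<(String, String)>) -> HashSet<(String, String)> {
    loop {
        let mut derived = Vec::new();
        for (a, b) in &facts {
            for (b2, c) in &facts {
                if b == b2 && !facts.contains(&(a.clone(), c.clone())) {
                    derived.push((a.clone(), c.clone()));
                }
            }
        }
        if derived.is_empty() {
            return facts; // fixpoint reached
        }
        facts.extend(derived);
    }
}

fn main() {
    let facts: HashSet<_> = HashSet::from([
        (":Dog".to_string(), ":Mammal".to_string()),
        (":Mammal".to_string(), ":Animal".to_string()),
    ]);
    let inferred = closure(facts);
    // The closure entails :Dog subClassOf :Animal.
    assert!(inferred.contains(&(":Dog".to_string(), ":Animal".to_string())));
    println!("{} subclass facts after closure", inferred.len());
}
```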

    Architecture

    Rust

    Oxigraph (in-memory SPARQL store)

    rusqlite (state, feedback, monitoring)

    Single binary, no Python or Java

    Run:

    cargo build --release
    ./target/release/open-ontologies serve

    Connect any MCP client and the tools appear.

    Benchmarks

    Reasoning is 7.5×–1,633× faster than HermiT on LUBM scaling tests.

    On the OntoAxiom benchmark, the tool-augmented workflow achieved F1 = 0.305 vs 0.197 for the best bare LLM.

    Key idea

    LLMs are good at understanding requirements and generating structure. Triple stores are good at validation and truth.

    The winning pattern is: LLM generates → tools verify.

    This approach likely extends beyond ontologies: in real systems, LLMs succeed not by knowing answers, but by calling the right tools.