
As life-science organizations race to adopt generative AI, one point stands out: your AI is only as good as your data. While large language models (LLMs) offer powerful capabilities, they are not tailored to specialized scientific data and depend on a solid data foundation. Making data Findable, Accessible, Interoperable, and Reusable (FAIR) enables AI systems to deliver more accurate, reliable, and cost-effective outcomes.
Key points include:
- Why many AI projects are still fundamentally reliant on robust data management
- How FAIR data complements LLMs through explicit semantics and structure
- The critical role of data quality and governance in AI success
Whether you’re a data steward, scientist, or innovation leader, this session will offer fresh perspective on aligning your data and AI strategies for maximum impact. Join us on May 21 to explore why a durable AI strategy needs a robust data strategy grounded in the FAIR Principles. Don’t let unstructured data hold your AI back—make it FAIR.
Speakers
- Angelika Fuchs, Roche, Chapter Lead, Data Products & Platforms
- Martin Robbins, Ontoforce, Product Manager
- Tom Plasterer, XponentL Data, Managing Director, Knowledge Graph & FAIR Data Capability
- Ted Slater, EPAM, Managing Principal, Scientific Informatics Consulting
Hosted by Giovanni Nisato, Project Manager, Pistoia Alliance
Register below