
Goodfire, a company focused on AI interpretability research, has raised $50m in a Series A funding round.
Led by Menlo Ventures, the funding round saw participation from Anthropic, Lightspeed Venture Partners, B Capital, Work-Bench, Wing, South Park Commons, and other investors.
Goodfire plans to use the funding to scale its research activities and further develop Ember, its core interpretability platform.
Ember is designed to give users access to the internal mechanisms of neural networks, aiming to make these systems more understandable and controllable.
Menlo Ventures investor Deedy Das said: “AI models are notoriously nondeterministic black boxes.
“Goodfire’s world-class team—drawn from OpenAI and Google DeepMind—is cracking open that box to help enterprises truly understand, guide, and control their AI systems.”

Goodfire focuses on mechanistic interpretability research, which seeks to understand and reverse engineer neural networks.
The Ember platform is designed to decode the neural processes within an AI model, offering direct, programmable access to its internal workings.
Rather than treating a model as a black box defined only by its inputs and outputs, Ember opens up possibilities for applying, training, and aligning AI models, enabling users to uncover hidden insights, precisely control a model’s behaviour, and improve its overall performance.
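For readers unfamiliar with this style of interpretability, the sketch below illustrates the general idea of reading and intervening on a model’s internal activations using standard PyTorch forward hooks. It is not Goodfire’s Ember API; the toy model, layer choice, and steering vector are illustrative assumptions only.

```python
# A minimal, hypothetical sketch of activation-level interpretability:
# "reading" a model's hidden activations and "writing" to them to steer
# behaviour. This is NOT Goodfire's Ember API; everything here is a
# generic illustration using PyTorch hooks.
import torch
import torch.nn as nn

# Toy two-layer network standing in for a much larger neural model.
model = nn.Sequential(
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Linear(32, 4),
)

captured = {}

def capture_hook(module, inputs, output):
    # Read: record the hidden activations so they can be inspected.
    captured["hidden"] = output.detach().clone()
    return output

def steering_hook(module, inputs, output):
    # Write: add a fixed direction to the hidden activations,
    # nudging the model's downstream behaviour.
    steering_vector = torch.zeros_like(output)
    steering_vector[..., 0] = 2.0  # amplify an arbitrary hidden feature
    return output + steering_vector

x = torch.randn(1, 16)

# Inspect internal activations rather than only inputs and outputs.
handle = model[1].register_forward_hook(capture_hook)
baseline = model(x)
handle.remove()
print("hidden activations:", captured["hidden"].shape)

# Intervene on the same layer and observe the change in the output.
handle = model[1].register_forward_hook(steering_hook)
steered = model(x)
handle.remove()
print("output shift from intervention:", (steered - baseline).norm().item())
```

In practice, interpretability platforms work with far larger models and with learned, human-interpretable feature directions rather than a hand-picked unit, but the read-and-intervene pattern shown here is the core idea.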
Goodfire co-founder and CEO Eric Ho said: “Our vision is to build tools to make neural networks easy to understand, design, and fix from the inside out. This technology is critical for building the next frontier of safe and powerful foundation models.”
Anthropic CEO and co-founder Dario Amodei said: “As AI capabilities advance, our ability to understand these systems must keep pace.
“Our investment in Goodfire reflects our belief that mechanistic interpretability is among the best bets to help us transform black-box neural networks into understandable, steerable systems—a critical foundation for the responsible development of powerful AI.”
Goodfire said it is advancing its interpretability research through strategic collaborations with leading model developers.
Furthermore, the firm intends to release additional research previews demonstrating interpretability techniques in areas including language models, image processing, and scientific modelling.
Goodfire said these initiatives will uncover new scientific insights and reshape how users interact with and harness AI models.