Universities are increasingly integrating AI tools for tutoring, student support, assessment, and administrative tasks. These implementations show that vendors must align more closely with institutional requirements for data privacy, customisation, governance, and academic quality.

For example, in April 2025, Northeastern University entered a strategic partnership with Anthropic to roll out Claude for Education campus-wide to around 49,000 students, faculty, and staff across its 13 campuses. The deployment includes a “Learning Mode” that guides students toward solutions rather than simply providing them. Northeastern also states that inputs submitted to Claude are not used to train Anthropic’s public models.

Another example is Syracuse University, which announced in September 2025 that its entire university community would gain access to Claude for Education. The rollout begins on 24 September 2025 and includes workshops and training resources to help students, faculty, and staff use the platform responsibly and in line with academic integrity policies.

How vendors can earn trust and scale responsibly

Vendors must provide clear privacy guarantees, specifying whether student or staff inputs will be used to train external models, how long data is retained, and who can access that data. Northeastern’s agreement with Anthropic makes clear that the university’s inputs into Claude are not used to train public-facing models.

Vendors should also build tools that institutions can configure, including permissions, disclosures, metadata, and declarations of AI usage, so that they align with institution-specific policies and culture. In the University of Michigan’s ‘AI Policy Module 2.0’ pilot during Autumn 2024, students in computer science courses completed policy-memo-style writing assignments that connected technical AI topics from their coursework to ethical and governance issues such as regulation, accountability, and bias. These assignments also required students to justify specific recommendations using evidence and concepts from the module. Following the module, students reported heightened concern about AI’s ethical impact and increased confidence in discussing AI regulation and policy.

Supporting readiness is also essential. For example, Northeastern offers training workshops and guidance on secure use of Claude, including what data may or may not be shared and how to use Learning Mode. Vendors must also design tools that preserve academic integrity, such as Learning Mode, which prompts students to think critically rather than simply handing them answers, alongside faculty oversight features.

Strong vendor relationships develop when institutions treat vendors as partners rather than mere suppliers. At Northeastern, Anthropic is co-designing tools, practices, and responsible AI policies with faculty, staff, and students. This partnership includes jointly developing AI-powered tools that reflect educational needs, not just technical capability.

AI has great potential in higher education, but efficiency or novelty alone is not sufficient. Universities now expect vendors to deliver solutions that are ethical, transparent, customisable, aligned with strong teaching practices and academic integrity, and developed in partnership with the education community. Tools that meet these standards will not just be adopted; they will help shape a future of higher education that serves students, faculty, and the core mission of knowledge and intellectual growth.