Executive Summary. France Hoang argues that AI in education must evolve from isolated tools into governed, collaborative infrastructure that institutions can oversee, audit, and align with learning outcomes.
As AI becomes embedded in higher education, institutions face a fundamental shift from adopting tools to operating AI as core infrastructure. The challenge is no longer access to models, but how to govern their use across teaching, learning, and compliance-sensitive environments.
In this conversation, BoodleBox CEO and Founder France Hoang draws on experience across national security, law, and technology to explain why AI in universities must be transparent, collaborative, and institutionally governed. He outlines a model where shared AI environments replace isolated usage, open models enable greater control, and governance frameworks ensure accountability as AI becomes integral to curriculum design and academic operations.
AITJ: France, how have your White House, military, legal, and startup experiences shaped your view of institutional responsibility with frontier tech like AI?
Every environment I’ve worked in, from overseas deployments to the White House and the courtroom, has had one thing in common: failure has real human consequences. That shapes how you think about powerful tools. You learn fast that technology doesn’t absorb accountability; it amplifies it. Leaders still own the judgment calls.
I’ve seen firsthand what happens when institutions fail – and when they stagnate. Technology can and should be disruptive – but that disruption should lead to a better world, not just a changed one.
AI is no different. You can’t hand a black box to an institution and call it governance. What’s needed is oversight that’s as sophisticated as the technology itself, with auditable workflows, clear lines of responsibility, and humans who stay genuinely in the loop. That’s what I'm trying to build into learning infrastructure: AI that institutions can see, shape, and govern, not just deploy and hope for the best.
You can’t hand a black box to an institution and call it governance.
France Hoang
Why bring open models like Nemotron into regulated educational settings instead of relying only on proprietary systems?
AI in higher education isn’t a convenience feature anymore; it’s becoming critical infrastructure. And if you build that infrastructure entirely on closed systems, you inherit all their constraints, including opaque behavior, vendor lock-in, and limited room to adapt as pedagogy and regulation evolve.
Open models give institutions something closer to a lab environment than a mysterious external service. You can inspect them, benchmark them, and govern their behavior as conditions change. Integrating NVIDIA Nemotron into BoodleBox is about building that kind of ecosystem: one that gives institutions more control, greater resilience, and a stronger foundation for long-term responsibility.
What does “collaborative AI” actually mean in a university, and how is it different from standalone black-box tools?
Most AI tools today are essentially solitary experiences, with one person, one prompt, and one private thread that nobody else ever sees. Collaborative AI inverts that. It creates a shared workspace where students, faculty, and multiple models work together in the open.
Practically, that means group assignments where everyone can see and critique what the AI contributed. Instructors designing structured prompts for an entire cohort. Students comparing outputs from different models side by side, all inside a transparent history that faculty can review and assess. The difference is simple. It’s not a private chat window. It’s an institutional learning environment.
Watching AI isolate learners and educators instead of connecting them is what crystallized this for me. To draw an analogy to the invention of the automobile: the world doesn’t need faster horses – it needs better roads. And those roads should bring us together, not pull us apart.
How does a shared AI environment change the governance equation compared to individual AI access?
When every student is off using their own mix of AI tools, governance becomes impossible. Institutions end up chasing unknown vendors, inconsistent data practices, and interactions that never surface to faculty or the institution at all.
A shared environment changes the dynamic entirely. You define guardrails once, apply them consistently, and can observe how AI is being used across courses and departments. That's what makes FERPA compliance and academic integrity tractable: not policing AI from the outside, but governing it from the inside, with real context.
A shared environment makes AI governable from the inside, not something institutions try to control from the outside.
France Hoang
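To make the idea of defining guardrails once and applying them consistently concrete, here is a minimal, purely illustrative sketch of an institution-level policy and a single enforcement check. The field names, values, and function below are hypothetical and are not drawn from BoodleBox's actual configuration.

# Illustrative institution-wide guardrail policy; every field name and value here is hypothetical.
INSTITUTION_GUARDRAILS = {
    "approved_models": ["nemotron", "campus_hosted_llm"],  # models the institution has vetted
    "log_all_interactions": True,                          # every exchange stays auditable
    "transcript_retention_days": 365,                      # histories remain reviewable by faculty
    "blocked_use_categories": ["graded_exam_answers"],     # uses institutional policy disallows
}

def is_permitted(model: str, use_category: str, policy: dict = INSTITUTION_GUARDRAILS) -> bool:
    """Apply the same guardrails to every course and department."""
    return (
        model in policy["approved_models"]
        and use_category not in policy["blocked_use_categories"]
    )

# The same check runs for a first-year seminar and a graduate capstone alike.
print(is_permitted("nemotron", "group_case_analysis"))   # True
print(is_permitted("nemotron", "graded_exam_answers"))   # False

The point of a single policy object like this is that guardrails live in one place the institution controls, rather than being re-decided tool by tool and course by course.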
What lessons from defense and government inform how AI should be deployed in higher education?
In national security environments, the working assumption is that any powerful tool will eventually be misused. That’s not pessimism; it’s realism. So resilience needs to be built in from the start. Plan for edge cases, put real oversight in place, and never treat “trust me” as a control.
In higher education, that translates into a few firm principles: don’t deploy opaque capabilities that can’t be explained to students, faculty, or regulators; separate access from authority so students can experiment, but institutions set the rules; and treat governance, logging, and red teaming as core features, not something that’s bolted on later.
Many AI tools optimize for speed and individual productivity. Why is a transparency-and-shared-reasoning model better for institutions?
Speed is genuinely intoxicating. But institutions aren’t in the business of efficiency for its own sake; they’re in the business of learning, trust, and outcomes. A tool that generates an answer in two seconds isn’t useful if no one sees how it was produced, whether it’s reliable, or how it shaped student thinking.
If we treat education as a transaction, we shouldn’t be surprised when students do too. AI allows students to optimize education as a transaction, in ways that undermine the learning itself.
When the process is visible, including the prompts, revisions, the alternatives considered, and the human commentary, AI becomes a learning partner rather than a shortcut. And for institutions concerned about pedagogy, assessment, and academic integrity, that visibility is the whole ballgame.
How important is it that students understand the underlying model ecosystem, not just the interface?
If students only learn to click a button on one branded tool, the system has failed them. The future of work involves navigating a heterogeneous ecosystem that includes closed models, open models, domain-specific models, and custom institutional models. Students need to understand how to move across all of it to be prepared for the modern workforce.
Putting multiple models inside a governed learning environment gives students the ability to learn how different models behave, where they fail, and how to route tasks appropriately. Creating this AI-native environment is the difference between being a passive consumer of AI and someone who can use it critically, knowledgeably, and collaboratively.
Open vs. proprietary AI: what are the institutional trade-offs between flexibility, performance, control, and risk?
Open models offer real transparency and adaptability. Universities can inspect them, benchmark them, fine-tune them, align them with their own governance standards. When AI becomes part of academic infrastructure, not just a plug-in, that level of control matters a great deal.
That said, open models aren’t “set it and forget it.” They require genuine institutional capacity: thoughtful deployment, ongoing evaluation, model monitoring, and clear governance structures.
The flip side is that institution-built proprietary models often age badly. Academic procurement and internal development timelines move far slower than AI innovation. By the time a custom closed system ships, it can already be behind the frontier, which defeats the purpose for students who need exposure to current capabilities.
Tulane and Texas A&M are deploying collaborative AI across business programs. How are faculty redesigning curriculum around AI rather than simply adding a tool?
The shift we're seeing is from "AI as a one-off assignment" to "AI as an embedded competency." In business programs, that means weaving it into case analyses, group projects, and simulations, not relegating it to a single week on the syllabus.
Faculty are designing longitudinal experiences such as prompt literacy in one course, AI-assisted analysis in another, and ethics and governance woven into capstones. A shared environment makes this possible because students aren’t starting from scratch every semester, and instructors can see and assess the AI-assisted process, not just the final output.
How should boards and executive leadership think about AI governance at scale as it embeds into coursework, research, and operations?
Boards should treat AI less as software or a tool to purchase and more as a new layer of institutional infrastructure, like a learning management system or cloud computing. The questions then become: What are our AI principles? What environments do we authorize? How do we monitor usage and outcomes over time?
In practice, that means standardizing a small number of governed environments rather than letting a sprawl of unvetted tools take hold. It means aligning AI policy with existing privacy, integrity, and research compliance frameworks. And it means establishing clear ownership across IT, academic leadership, and risk management. Get the architecture and governance right, and institutions can innovate boldly and safely.
How do you lead cross-institutional collaboration when incentives and risk appetites differ?
Joint operations taught me that alignment doesn’t start with technology; it starts with shared stakes and shared language. You have to be explicit about the mission, the constraints, and where different parties draw their lines on risk.
A big part of my job is translation: between technologists and faculty, between legal and academic leadership, between early adopters and their more cautious peers. You don’t erase differences in incentives. You make them visible, negotiate them honestly, and keep everyone focused on the learners who are ultimately being served.
You’ve reported measurable improvements in prompting skills and AI fluency. What does meaningful AI literacy look like beyond basic usage?
AI literacy is not “I know how to ask ChatGPT a question.” It looks more like students who can frame a problem, select appropriate models, and iterate meaningfully on outputs. Faculty who design assignments that require students to show their process, not just the final answer. Graduates who can discuss bias, reliability, and governance as fluently as they talk about Excel or statistics.
When work becomes fundamentally AI-native, educators can actually observe how students prompt, iterate, and collaborate over time. That’s where real fluency shows up: not just familiarity, but genuine capability.
As infrastructure becomes GPU-accelerated and model routing more dynamic, what does the next phase of institutional AI architecture look like?
Less “one model in one app,” more intelligent routing layer across many models and use cases. Institutions will want a unified interface for users with a backend that selects the right model based on task, cost, privacy, and performance requirements.
Our work with NVIDIA is a step in that direction: GPU-accelerated infrastructure, Nemotron open models, and the flexibility to layer in others as the ecosystem evolves. Over time, I expect to see more policy-aware routing, where institutional rules around data sensitivity, course context, and user role help determine what capabilities get invoked.
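As a rough sketch only, and not a description of BoodleBox's or NVIDIA's actual implementation, a policy-aware routing layer of the kind described here could look something like the following; the model tiers, policy fields, and rules are all hypothetical.

# Hypothetical sketch of policy-aware model routing; tiers, fields, and rules are illustrative only.
from dataclasses import dataclass

@dataclass
class RequestContext:
    task: str              # e.g. "case_analysis", "assessment_design"
    data_sensitivity: str  # e.g. "public", "ferpa_protected"
    course: str            # course identifier supplying pedagogical context
    user_role: str         # "student", "faculty", or "staff"

# Ordered institutional rules: the first matching rule decides which model tier handles the request.
ROUTING_RULES = [
    # FERPA-protected data stays on models the institution hosts and can audit.
    (lambda c: c.data_sensitivity == "ferpa_protected", "self_hosted_open_model"),
    # Faculty designing assessments get the highest-capability governed tier.
    (lambda c: c.user_role == "faculty" and c.task == "assessment_design", "frontier_model_governed"),
    # Everything else defaults to a cost-efficient general-purpose tier.
    (lambda c: True, "general_purpose_model"),
]

def route(context: RequestContext) -> str:
    """Return the model tier that institutional policy assigns to this request."""
    for matches, tier in ROUTING_RULES:
        if matches(context):
            return tier

# Example: a student working with protected records is routed to the self-hosted tier.
print(route(RequestContext("case_analysis", "ferpa_protected", "MGMT-601", "student")))

The ordering is the governance point: privacy and institutional rules outrank capability and cost, and swapping in new models later does not change the policy layer that sits above them.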
How do you balance experimentation with accountability in environments that shape long-term societal outcomes, like universities?
Universities are where society rehearses the future. They have a genuine obligation to let students experiment with AI, but not in a way that’s unbounded, unobserved, or inequitable.
The goal is to think in terms of safe sandboxes rather than wild frontiers. Students should be able to try multiple models, push creative limits, and see where AI fails, all inside an environment with institutional oversight, transparent histories, and clear norms for disclosure and attribution. Bold experimentation with structured accountability is what produces graduates who can both harness AI and challenge it.
You spend time (a lot of time) setting left and right limits … then you run like mad between those limits.
In five years, will collaborative AI be the default institutional model across education and enterprise? What barriers remain?
I believe collaborative AI becomes the default anywhere learning, governance, and team-based work matter, which is where most institutions operate. It fits how they function far better than isolated, opaque tools ever will.
The barriers that remain are less technical than organizational: change management for faculty and staff who are already stretched thin, policy uncertainty that makes leaders hesitant to commit, and procurement cycles that lag well behind the pace of AI development. But success stories are accumulating, and governance models are maturing. Collaborative AI is moving from “innovative” to “expected,” much the way learning management systems did a generation ago.




