Naomi Latini Wolfe has spent her career at the crossroads of AI, education, and social equity, exploring how technology can transform learning while addressing systemic inequalities. As an advocate for inclusive EdTech, she highlights both the opportunities and risks AI presents in education, from accessibility gaps to algorithmic bias. In this interview, Wolfe discusses the challenges of equitable AI-driven learning, the role of social structures in adoption, and what it will take to foster diversity in AI leadership. She also shares a bold vision for the future—one that requires urgent action to ensure AI serves all learners fairly.
Your work sits at the intersection of AI, education, and social equity. What initially drew you to this space, and how has your perspective evolved as AI’s role in education has expanded?
What drew me to this space was a fundamental belief that education should be a great equalizer, and I saw the potential for technology to help level the playing field. As a sociologist, I’ve always been trained to examine how social structures and cultural forces shape our identities and opportunities. Bringing that lens to education felt like a natural fit.
As AI’s role in education has grown, so has my understanding of its potential and pitfalls. For example, I’ve seen how online platforms can break down geographic and socioeconomic barriers, empowering learners through accessible and engaging experiences. But I’ve also become acutely aware of how AI can unintentionally amplify biases and systemic inequalities. That’s why I strongly advocate for proactive inclusion throughout the innovation lifecycle—from design to implementation and evaluation. We need to ask tough questions about equity every step of the way.
As an advocate for inclusive EdTech, what are some of the biggest barriers you see in achieving true equity in AI-driven learning environments, and what strategies do you recommend to overcome them?
When it comes to creating genuinely equitable AI-driven learning environments, I see a few significant hurdles. One of the most pressing is accessibility. Many students, particularly those from marginalized or low-income backgrounds, face disparities in digital literacy and technology access. Without reliable internet or devices, these students are often left behind, which only widens the existing educational gap.
Another critical challenge is algorithmic bias: many systems are trained on historical data that reflects systemic inequalities, which leads to unfair educational outcomes and reinforces disadvantages for specific demographic groups. Ethical issues also arise, as the rapid adoption of AI often outpaces clear governance frameworks, raising privacy and accountability concerns. Lastly, there's insufficient collaboration between educators and AI experts, which hinders effective integration and alignment with educational goals.
To overcome these barriers, I recommend a multi-pronged approach:
Invest in Professional Development: Equip educators with the skills to use AI ethically and effectively.
Leverage Data Analytics: Use AI to create personalized learning pathways tailored to individual student needs.
Design Inclusively: Involve diverse stakeholders, including marginalized groups, in AI development.
Advocate for Equity-Focused Policies: Push for regulations that prioritize ethical AI use and diverse representation.
Ultimately, achieving equity requires a collaborative and adaptive approach that ensures all students feel supported and empowered.
Your research explores the ethical implications of AI in education. What are some overlooked biases in AI-driven learning systems, and how can educators and developers work together to mitigate them?
One often overlooked bias is how social inequalities become embedded in the data AI systems are trained on, and are then replicated in those systems' decision-making. AI isn't neutral; it reflects the values and biases of its creators and its training data.
To address this, collaboration between educators and developers is key:
Educators bring insights into learners’ diverse needs, helping identify potential biases.
Developers can make systems more transparent and accountable, allowing educators to understand and challenge decisions.
For example, in my work on inclusive course design, I’ve seen how AI tools used for student assessments can unintentionally disadvantage non-native English speakers due to language biases in the algorithms. By working with developers, the system can be adjusted to account for linguistic diversity, ensuring fairer outcomes for all students.
You’ve led a $3M grant project focused on evidence-based programs for national dissemination. Can you share a defining challenge you faced in this initiative and how you addressed it?
One defining challenge was ensuring seamless execution across 20+ diverse sites, each with unique contexts and resources. To address this, we focused on clear communication, thorough training, and ongoing support.
For example, I directed and trained 20+ partner teams through the launch process, ensuring everyone was equipped with the needed tools. We also closely monitored key metrics and coordinated data reviews to address real-time challenges. It was a complex undertaking, but seeing the program’s positive impact on communities made it incredibly rewarding.
Your textbooks emphasize solutions-oriented approaches to societal challenges. What’s an example of a breakthrough insight or case study from your work that has reshaped how educators approach inclusive course design?
Yes, in my textbook, Social Problems and Silver Linings, I really wanted to emphasize that students aren't just passive observers of social problems, but active agents of change. I wanted to empower them to see themselves as part of the solution.
One breakthrough insight that has shaped how I, and hopefully other educators, approach inclusive course design is the importance of promoting proactive inclusion throughout the innovation lifecycle. It's not enough to simply add diverse content or address equity as an afterthought.
For example, I worked on a course where I involved students from diverse backgrounds in the design process. Their input led to more inclusive materials and teaching methods, increasing engagement and success rates. We need to think about inclusion from the beginning, ensuring that all voices are heard and perspectives are valued.
As a Google Women Techmakers Ambassador and a strong advocate for women in AI, what changes do you think are most critical to fostering gender inclusivity in AI leadership and research?
As a Google Women Techmakers Ambassador, this topic is near and dear to my heart. I believe there are several critical changes we need to make to foster gender inclusivity in AI leadership and research:
First, mentorship and sponsorship are essential. We need to create more opportunities for women to connect with experienced mentors who can provide guidance and support. We also need to encourage women to proactively advocate for each other's advancement, whether that's through promotions or project opportunities.
Second, we need to build strong, supportive networks where women feel safe sharing experiences and offering support. These networks can be a lifeline, providing a sense of community and belonging in what sometimes feels like a very isolating field.
Third, we must address internalized biases and challenge the stereotypes holding women back. That means having open and honest conversations about gender dynamics and working together to create a more equitable culture.
Finally, I believe in leveraging digital tools to connect and amplify women’s voices in tech.
And, of course, it's critical to emphasize intersectionality, recognizing the unique challenges faced by women from diverse backgrounds. Women of color, LGBTQ+ women, and women with disabilities may face additional barriers, and we need to be mindful of those experiences.
With your background in sociology and technology, how do you see social structures influencing the adoption and effectiveness of AI in higher education, and what systemic changes do you believe are necessary?
Social structures significantly shape AI’s adoption and effectiveness in higher education. For example, systemic inequalities can lead to biased algorithms that disadvantage certain groups.
To address this, we need:
Equitable Access: Ensure all students can access AI tools, regardless of socioeconomic background.
Ethical Frameworks: Develop guidelines for responsible AI use, addressing bias and privacy.
Digital Literacy Training: Equip students and educators with the skills to navigate AI-driven environments.
Inclusive Design: Involve diverse stakeholders in AI development to ensure equitable systems.
By addressing biases, ensuring transparency, and involving all stakeholders, higher education institutions can harness AI's potential while upholding social equity and ethical standards.
Looking ahead, what’s a bold prediction you have for the future of AI in education, and what steps do we need to take now to ensure that future is both inclusive and effective?
Okay, here's my bold prediction: AI has the potential to revolutionize education, but it could just as easily exacerbate existing inequalities and strain our planet. AI solutions must benefit all members of society, especially underrepresented groups. It really boils down to the choices we make today.
To ensure that the future of AI in education is both inclusive and effective, we need to:
Prioritize responsible AI development and deployment. That means addressing bias, protecting privacy, and ensuring accountability.
Invest in digital literacy and skills training for all learners. We need to equip everyone to not only use AI tools but also to understand their limitations and ethical implications.
Foster collaboration and knowledge-sharing across disciplines. Educators, developers, policymakers, and community members need to work together to shape AI's future in education.
Promote sustainability. By joining communities dedicated to sustainability, we can balance AI's promise with its environmental impact.
Ultimately, it's about ensuring that AI empowers learners, promotes equity, and creates a more just and sustainable world.