In this exclusive interview, we speak with Tommy Tran, a Software Engineer now at Meta and formerly at Ramp, whose career spans optimizing game engines at Ubisoft and tackling large-scale AI challenges at Meta and Ramp. Tommy shares his insights on transitioning into AI and ML, implementing innovative solutions while maintaining platform reliability, and the role of AI in driving U.S. business and economic growth. The conversation is packed with practical takeaways and forward-looking perspectives, from advice for smaller companies exploring AI to the future of generative AI and augmented reality. Dive in to explore how Tommy bridges technical expertise with strategic impact.
Read more interviews like this here: Rishitha Kokku, Senior Software Engineer — DevOps Specialization, AI in DevOps, Infrastructure as Code, High-Performance Teams, and the Future of AI in Software Engineering
What inspired your transition from game development at Ubisoft to leveraging advanced AI and ML for large-scale challenges at Meta and Ramp?
While working on graph-based modeling and real-time simulations at Ubisoft, I realized the core principles behind optimizing a game engine—handling massive data flows, real-time decision-making, and predictive modeling—are directly applicable to complex AI challenges in other sectors. Game development taught me the value of high-performance computing and how to push real-time systems to their limits. At Meta and Ramp, I saw a chance to take these advanced computational methods further by integrating state-of-the-art AI techniques—from transformer-based architectures for text generation to reinforcement learning for resource allocation—into large-scale platforms. This shift let me tackle critical issues like ad monetization and customer data optimization, both of which significantly drive economic gains and operational efficiency in U.S.-based companies.
How do you measure the real-world impact of your AI solutions, especially when quantifying cost savings or revenue gains?
I rely on structured experiments—such as A/B testing or multi-armed bandits—to verify that a given AI model truly drives improvement. For example, I deploy a model to a subset of users or data segments and compare its performance against a control group using well-defined metrics like conversion rates or cost per acquisition. On the financial side, I map these improvements to relevant business metrics—like monthly revenue, resource utilization, or operational costs. By continuously linking model outputs to enterprise-level outcomes, I can demonstrate how each AI-driven change translates directly into measurable gains, whether that means higher user engagement, reduced infrastructure spending, or more efficient sales operations. This data-driven validation not only proves ROI but also builds confidence among stakeholders, paving the way for broader adoption of AI initiatives.
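To make the experiment-to-revenue mapping concrete, here is a minimal sketch (with hypothetical counts and dollar figures, not Ramp's or Meta's actual data) of a two-proportion z-test on conversion rates, followed by a back-of-the-envelope revenue projection of the kind described above:

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal-approximation two-sided p-value via the error function
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_b - p_a, z, p_value

# Hypothetical experiment: 10,000 users per arm
lift, z, p = two_proportion_ztest(conv_a=480, n_a=10_000, conv_b=540, n_b=10_000)
print(f"lift={lift:.4f}  z={z:.2f}  p={p:.4f}")

# Translate the lift into a business metric (illustrative numbers)
monthly_visitors = 1_000_000
revenue_per_conversion = 25.0
projected_gain = lift * monthly_visitors * revenue_per_conversion
print(f"projected monthly revenue gain: ${projected_gain:,.0f}")
```

In practice a production experimentation platform would handle sequential testing, variance reduction, and guardrail metrics; this sketch only shows the core statistical check and the revenue mapping.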
How do you balance adding innovative AI features with maintaining a robust, reliable customer data platform?
I adopt a tiered rollout strategy. Initially, new AI features—such as advanced sentiment analysis or generative text recommendations—are tested on a small subset of real-world data, typically in a “dark launch” phase where outputs are visible only to a handful of internal stakeholders. We gather performance metrics (accuracy, latency) and reliability indicators (crash reports, error logs). Once the feature shows stability and clear value, we gradually scale it to the entire customer data platform. Alongside these rollouts, we maintain clear fallback mechanisms so that if the AI service experiences downtime or unexpected behavior, essential data workflows remain unaffected. This approach ensures cutting-edge AI can coexist with mission-critical reliability.
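A tiered rollout like this can be sketched with a deterministic hash-based cohort and a fallback path. The function names and percentage below are illustrative assumptions, not the production system:

```python
import hashlib

ROLLOUT_PERCENT = 5  # start small, widen as metrics hold up

def in_rollout(user_id: str, percent: int = ROLLOUT_PERCENT) -> bool:
    """Deterministically bucket a user into the rollout cohort via a stable hash."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < percent

def ai_recommendation(user_id: str) -> str:
    # Placeholder for the real model call (hypothetical).
    return f"generative recommendation for {user_id}"

def baseline_recommendation(user_id: str) -> str:
    # Rule-based fallback that never depends on the AI service.
    return f"default recommendation for {user_id}"

def recommend(user_id: str) -> str:
    if not in_rollout(user_id):
        return baseline_recommendation(user_id)
    try:
        return ai_recommendation(user_id)
    except Exception:
        # Fallback keeps the core workflow unaffected if the AI path fails.
        return baseline_recommendation(user_id)
```

Hashing the user ID (rather than sampling randomly per request) keeps each user's experience consistent across sessions, which matters for clean experiment metrics during the rollout.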
What advice would you give smaller U.S. companies looking to introduce AI without large, dedicated AI teams?
Smaller companies can start by defining a focused pilot project with tangible goals—like automating parts of their customer service or generating targeted marketing insights. They don’t need a full-blown AI department right away; a small, cross-functional task force can identify a high-impact problem, gather relevant data, and use off-the-shelf tools or pre-trained models to experiment. From there, I encourage them to leverage cloud-based AI services (AWS, GCP, Azure) which provide scalable compute and user-friendly ML frameworks. Engaging with consultants or local AI meetups for guidance also helps. The combination of a modest, well-defined scope and easily accessible tooling often leads to early wins that boost confidence and build momentum for more advanced efforts, benefiting the U.S. economy by fostering grassroots AI innovation.
Can you describe a scenario that required intense cross-functional coordination to deliver an AI-driven feature?
I worked on developing a sentiment analysis solution to help a sales organization more effectively manage and respond to incoming emails. This project involved close collaboration across product managers, data scientists, engineers, and the sales leadership team. While data scientists refined the natural language processing (NLP) model—making it more precise at categorizing message tone—the engineering group focused on seamlessly integrating this model into the existing communication platform. Leadership defined the priority tags and guided how the outputs would be used in daily workflows. What made the effort successful was the real-time feedback loop from the sales team. Anytime the model misclassified an email, their immediate input highlighted gaps in our training data, giving us actionable steps to improve performance. By combining domain expertise from sales with ML knowledge from the data science team, we rolled out a feature that significantly boosted both response times and overall customer satisfaction.
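As an editorial illustration of the priority-tagging idea (the system Tommy describes used a trained NLP model, not a word list), a toy lexicon-based tagger for incoming emails might look like:

```python
# Toy lexicons standing in for a production NLP model (illustrative only).
NEGATIVE = {"frustrated", "cancel", "refund", "unacceptable", "angry"}
POSITIVE = {"thanks", "great", "appreciate", "happy", "love"}

def tag_email(body: str) -> str:
    """Assign a priority tag from message tone; 'urgent' routes to a human first."""
    words = {w.strip(".,!?").lower() for w in body.split()}
    neg = len(words & NEGATIVE)
    pos = len(words & POSITIVE)
    if neg > pos:
        return "urgent"
    if pos > neg:
        return "positive"
    return "neutral"

print(tag_email("I am frustrated and want a refund!"))  # urgent
print(tag_email("Thanks, the demo was great."))         # positive
```

The real value in the project came from the feedback loop: every misclassification reported by the sales team became a labeled training example, which a word-list approach cannot absorb but a learned model can.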
Where do you see AI having the biggest impact on U.S. businesses and the overall economy in the next five to ten years?
I believe widespread automation of operational tasks—from supply chain logistics to back-office processing—will supercharge productivity. At the same time, personalized AI experiences will enhance customer engagement, from predictive healthcare solutions to adaptive e-learning platforms. By refining these AI-driven efficiencies, American firms stand to save billions of dollars, while reinvesting those savings into further research, product innovation, and job creation. Additionally, the co-evolution of AI and edge computing means even smaller companies can harness advanced ML without needing massive infrastructure. This democratization of AI will spur broad economic growth and keep the United States on the cutting edge of global tech leadership.
What advice do you have for engineers looking to combine technical AI expertise with strategic business insights?
First, master the fundamentals of ML: gain hands-on experience with frameworks like PyTorch or TensorFlow, explore large-scale data processing (Spark, Hadoop), and practice setting up robust MLOps pipelines. Second, learn to translate AI jargon into tangible business metrics—show executives how a 2% uptick in model accuracy can produce millions in additional revenue or yield significant cost savings. Finally, prioritize ethical and privacy considerations from the start. This not only prevents legal hurdles but also builds trust with users, which is increasingly critical for AI deployments. By blending a deep technical foundation with an understanding of organizational goals, you’ll stand out as an engineer who drives meaningful, large-scale impact.
How do you see your role in helping the U.S. maintain its global leadership in AI?
One of my core missions is to democratize AI knowledge. By mentoring engineers and shaping educational programs, I help bridge gaps between cutting-edge research and practical applications. This includes:
- Guiding advanced students or junior engineers on best practices in model deployment, MLOps, and robust data pipelines.
- Advising businesses on navigating regulatory hurdles while adopting transformative AI solutions.
- Promoting a culture of ethical and sustainable AI so that as we innovate, we also maintain public trust.
Such mentorship doesn’t just benefit individual learners—it fuels the broader U.S. innovation engine. When the next generation of AI practitioners is well-prepared to integrate new technologies responsibly, America remains at the forefront of global competitiveness, creating economic growth and high-value jobs that sustain leadership in the tech sector.
How do you approach teaching AI fundamentals to organizations or engineers who are new to the field?
I start by breaking down core AI principles—such as data preparation, model architecture, and performance metrics—into manageable modules. For instance, I’ll cover fundamental concepts like feature engineering or gradient descent in workshop-style sessions that include hands-on coding exercises. The key is to balance theoretical depth with real-world application, so people immediately see how AI can solve their specific business problems. To ensure effective learning, I’ll often leverage interactive notebooks (e.g., Jupyter or Colab) that walk learners step-by-step through an example dataset and model training process. This approach gives immediate feedback, builds confidence, and lays a solid foundation for more advanced techniques—benefitting both new engineers and non-technical stakeholders who want to understand AI’s strategic potential.
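The kind of hands-on gradient-descent exercise mentioned above fits in a few notebook cells. This toy example (illustrative data, not taken from an actual workshop) fits a line y = w·x + b by hand-written gradient descent on mean squared error:

```python
# Toy dataset generated from y = 2x + 1
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]

w, b, lr = 0.0, 0.0, 0.01
n = len(xs)
for _ in range(5000):
    # Gradients of mean squared error with respect to w and b
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
    w -= lr * grad_w
    b -= lr * grad_b

print(f"w≈{w:.2f}, b≈{b:.2f}")  # converges toward the true slope 2 and intercept 1
```

Learners can then vary the learning rate to watch divergence firsthand, which builds intuition that transfers directly to tuning real training runs in PyTorch or TensorFlow.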
What excites you most about the intersection of generative AI, augmented reality, and customer experience in the next decade?
The convergence of generative AI and AR will unlock entirely new ways of interacting with digital content—imagine personalized product demonstrations or immersive educational experiences that adapt instantly to user input. This evolution will shift how we shop, learn, and socialize, with rich, contextually aware AI guiding everything from marketing campaigns to daily convenience tasks. From a broader standpoint, these technologies hold enormous economic and social potential for the U.S. They can spur growth by creating new markets and job roles, while also providing cost-effective solutions for critical challenges in healthcare, retail, and education. It’s an exciting era, and I’m committed to helping shape it responsibly and innovatively.
How do you ensure that the AI models you build align with both business objectives and ethical standards?
I integrate transparent model governance and stakeholder collaboration early on. Before a model is deployed, product managers, compliance officers, and data scientists review its intended goals and potential ethical pitfalls—like biased training data or unintended societal impacts. We also implement continuous auditing, where model outputs are monitored for anomalies or indicators of bias. If issues arise, we adapt our training data or the model architecture as needed. By combining human oversight with robust technical checks, we ensure that our AI remains aligned with both business success and societal well-being.
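One simple form of the continuous auditing described here is tracking the gap in positive-decision rates across segments. The metric is standard (demographic parity difference); the threshold and data below are hypothetical:

```python
def demographic_parity_gap(outcomes):
    """outcomes: mapping group -> list of model decisions (1 = positive outcome).
    Returns the largest difference in positive rates between any two groups."""
    rates = {g: sum(v) / len(v) for g, v in outcomes.items()}
    return max(rates.values()) - min(rates.values())

# Hypothetical audit batch of model decisions per segment
batch = {
    "segment_a": [1, 0, 1, 1, 0, 1, 1, 0],  # 5/8 positive
    "segment_b": [1, 0, 0, 0, 1, 0, 0, 0],  # 2/8 positive
}
gap = demographic_parity_gap(batch)
ALERT_THRESHOLD = 0.2  # illustrative cutoff that triggers human review
print(f"parity gap = {gap:.2f}, alert = {gap > ALERT_THRESHOLD}")
```

A real governance pipeline would track several fairness metrics over time and alert on drift, but even a single monitored gap like this turns "we check for bias" into something auditable.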
How do you handle data privacy and compliance (e.g., GDPR) while still achieving strong AI model performance?
I balance privacy protections and model performance by structuring data pipelines to collect only the minimum necessary features—often anonymized or aggregated—to train robust models. For instance, under GDPR restrictions, I implemented differential privacy for user data, ensuring no personally identifiable information was exposed. When user-level signals were limited, I turned to methods like multi-armed bandit algorithms or contextual embeddings that are less dependent on granular personal data. This preserved predictive power while honoring privacy regulations. Additionally, every model deployment includes periodic compliance audits to confirm data usage aligns with legal standards and user trust.
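A minimal sketch of the Laplace mechanism that underlies differential privacy for counting queries (the epsilon and counts here are illustrative, not the actual deployment):

```python
import math
import random

def dp_count(true_count: int, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy via the Laplace mechanism.
    Counting queries have sensitivity 1, so the noise scale is 1/epsilon."""
    u = random.random() - 0.5
    # Inverse-CDF sample from Laplace(0, 1/epsilon)
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

random.seed(0)
# Hypothetical aggregate query over user data: how many users clicked
noisy = dp_count(true_count=1_204, epsilon=0.5)
print(f"noisy count: {noisy:.1f}")
```

Smaller epsilon means stronger privacy but noisier answers, which is exactly the trade-off between privacy protection and model (or metric) quality described above.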




