
AI In Consumer Tech And Enterprise: Insights From Praveen Ellupai Asthagiri

Praveen Ellupai Asthagiri discusses the intersection of consumer and enterprise AI, highlighting their converging paths and the importance of governance and usability in AI design.


In the last few years, AI has moved from a curiosity to something people expect in both their living rooms and their work tools. Voice assistants sit in speakers, TVs, and phones, handling billions of interactions every day, with an estimated 8.4 billion assistants now in use worldwide. At the same time, enterprises are racing to operationalize generative AI across functions, with global organizations projected to spend hundreds of billions of dollars on AI solutions over the next few years. Those two worlds often look separate from the outside: consumer AI is judged on convenience and delight, while enterprise AI is judged on compliance, uptime, and measurable return. In practice, the best programs increasingly borrow from both.

Principal Technical Program Manager Praveen Ellupai Asthagiri sits exactly at that intersection. With more than 16 years of experience in AI and machine learning, conversational technologies, and enterprise transformation, he has led large-scale GenAI personalization initiatives and monetization programs with more than $450 million in impact for a global consumer AI assistant, while also driving cross-organizational governance models and digital modernization projects for enterprises. In this conversation, we spoke with Praveen about what consumer and enterprise teams get wrong about each other, where their paths are converging, and how his work on interaction history, personalization, and monetization informs both sides.


Praveen, thanks for joining us. When people talk about “consumer AI” versus “enterprise AI,” what do they usually get wrong about that distinction?

A lot of conversations treat them as two different species. Consumer AI is framed as playful and experimental, something you talk to on your couch. Enterprise AI is framed as rigid and serious, something that lives in a data center behind a ticketing system. In reality, they are starting to share more DNA than people think. On the consumer side, expectations are unforgiving. If an assistant forgets what you said two turns ago or gives you a recipe that ignores the fact that your family is vegetarian, the trust hit is immediate. On the enterprise side, the interactions are fewer but each one carries more weight, because the output may drive a financial decision, a compliance judgment, or a customer-facing workflow. The mistake is assuming that consumer AI can ignore governance, or that enterprise AI can ignore usability. The real work is bringing the discipline of enterprise systems to consumer-scale assistants and bringing the empathy and clarity of consumer experiences into enterprise tools.

You've led large-scale short-term and long-term memory programs for a major consumer AI assistant. How did that shape your view of what “good AI” feels like to everyday users?

Working on memory systems forces you to think about people's lives, not just their queries. The system I helped lead was built around simple questions: When someone talks to an assistant across multiple devices and over many days, what should it remember, what should it forget, and how do we keep that consistent without slowing things down? We designed memory to understand patterns like recent context, ongoing activities, preferences, and who is speaking, while also respecting boundaries. The goal was not to remember everything. It was to remember the right things so that three steps later, when you say "What should I cook for my mom tonight?" the assistant already knows she is vegetarian without asking again. That sort of continuity is invisible when it works, which is exactly the point. Users don't think in terms of "multi-turn context windows." They think in terms of "Did this thing listen to what I already told it?" Once you internalize that, you start to see AI less as a single response engine and more as a long-running relationship.
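The memory behavior Praveen describes can be pictured with a small sketch. This is not the production system he worked on; the class and method names below are invented for illustration. The core idea is simply that short-term context (bounded, fast-moving) and long-term preferences (durable, user-stated) live in separate stores, so a later question can be answered without re-asking:

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class AssistantMemory:
    """Toy sketch of short-term vs. long-term assistant memory.

    Short-term memory holds only the most recent turns; long-term
    memory holds durable, user-stated preferences keyed by subject.
    All names here are illustrative, not a real product's API.
    """
    short_term: deque = field(default_factory=lambda: deque(maxlen=10))
    long_term: dict = field(default_factory=dict)

    def observe(self, utterance: str) -> None:
        # Recent context is bounded so lookups stay fast and old
        # turns fall away on their own (a crude form of forgetting).
        self.short_term.append(utterance)

    def remember_preference(self, subject: str, preference: str) -> None:
        # Only explicit, durable facts are promoted to long-term memory.
        self.long_term[subject] = preference

    def recall(self, subject: str):
        # Returns None when nothing was ever stated for this subject.
        return self.long_term.get(subject)

memory = AssistantMemory()
memory.observe("What should I cook tonight?")
memory.remember_preference("mom", "vegetarian")

# Days later, the assistant answers without asking again:
memory.recall("mom")  # → "vegetarian"
```

The point of the split is the one Praveen makes: the goal is not to remember everything, but to decide which facts deserve to survive past the current conversation.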

Enterprises are also investing heavily in generative AI. Where do you see the biggest differences in how consumer and enterprise teams approach AI today?

The most obvious difference is the unit of risk. In consumer products, you care about trust at scale. One awkward answer is not catastrophic, but the pattern matters. In enterprises, the downside of a single bad answer can be material: a wrong recommendation in a regulated workflow, an off-policy decision in finance, or a misrouted case in customer support. That difference shows up in how teams design their systems. Consumer AI teams obsess over latency, recovery, and delight. Enterprise AI teams obsess over audit trails, role-based access, and reproducibility. What I see changing is that both now realize they need pieces of the other. Consumer teams are embedding stricter governance because people are using assistants for more sensitive tasks. Enterprise teams are experimenting with more conversational interfaces because employees are pushing back on rigid tools. One useful way to frame it is that consumer AI starts with “Can we make this feel natural?” and enterprise AI starts with “Can we make this safe and explainable?” Convergence happens when you refuse to treat those as competing goals.
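The enterprise side of that split (audit trails and reproducibility) is often implemented as a thin wrapper that every model call passes through. The sketch below is a hypothetical pattern, not any particular platform's API: a fixed seed makes the call reproducible, and an append-only record captures who asked what:

```python
import hashlib
import time

AUDIT_LOG = []  # stand-in for an append-only audit store

def audited_call(user_role: str, prompt: str, model_fn, seed: int = 0) -> str:
    """Hypothetical governance wrapper around an enterprise model call.

    Records who asked, what was asked, the seed used, and a hash of
    the answer, so the interaction can be audited and replayed later.
    """
    answer = model_fn(prompt, seed=seed)  # fixed seed => reproducible output
    AUDIT_LOG.append({
        "ts": time.time(),
        "role": user_role,
        "prompt": prompt,
        "seed": seed,
        "answer_sha256": hashlib.sha256(answer.encode()).hexdigest(),
    })
    return answer

# A deterministic stand-in model, for illustration only:
def toy_model(prompt: str, seed: int = 0) -> str:
    return f"echo[{seed}]: {prompt}"

out = audited_call("analyst", "Summarize Q3 risk report", toy_model)
```

Consumer teams rarely start here because the wrapper adds latency and friction; enterprise teams rarely skip it because a single unlogged answer in a regulated workflow is unacceptable.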

You have also led monetization programs with more than $450 million in impact for a large-scale AI assistant. What did those programs teach you about aligning AI value with business value?

Monetization at that scale is really a test of alignment. If people do not trust the assistant, they will not adopt new paid features, and if the features do not genuinely solve problems, they will churn. On the other side, if the business cannot see a clear line from AI behavior to revenue or savings, investment dries up. We approached monetization as a systems problem. First, make the assistant meaningfully more helpful by improving the quality of its memory and personalization. Then, understand which experiences people value enough to pay for, without turning every interaction into an upsell. Finally, build measurement that connects changes in AI behavior to long-term outcomes, not just quick wins. The key lesson is that monetization is not a separate track from experience and governance. The same rigor that keeps context accurate and privacy respected is what allows you to justify sustained investment. When leaders can see that a better conversation history or a smarter personalization layer is tied to hundreds of millions of dollars in impact, they stop treating AI as a side project and start treating it as core infrastructure.

How does that translate when you work with enterprises on their own AI and GenAI programs?

In enterprise settings, the story often starts with a proof of concept in one team. Maybe it is a conversational assistant for internal support, maybe it is a GenAI tool for summarizing documents. The pattern I have seen is that the first success is usually built in a silo with heroics from a small group. The challenge is turning that into a platform others can trust. This is where my background in cross-organizational governance and digital modernization comes in. You have to create shared services for things like identity-aware access, retrieval, and evaluation, and then wrap them in clear policies that owners across the company can understand. That is how you make sure an experiment in one department does not turn into a risk in another. Enterprises are spending aggressively on AI, with forecasts suggesting this will not stop anytime soon. But the pattern is clear from industry research: value only shows up when organizations move from scattered pilots to platforms with strong governance and feedback loops. The work I do is about making that shift repeatable.
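The "shared services" idea can be made concrete with a minimal sketch of identity-aware retrieval: documents are filtered by the caller's role before any model sees them. The roles, documents, and function name below are invented for illustration; a real shared service would also log access and rank by relevance:

```python
# Each document carries the set of roles entitled to read it.
DOCS = [
    {"id": "hr-001", "text": "Compensation bands", "roles": {"hr"}},
    {"id": "eng-042", "text": "Incident runbook", "roles": {"eng", "hr"}},
]

def identity_aware_retrieve(query: str, caller_role: str):
    """Return only documents the caller's role may see.

    The permission gate runs before any relevance logic, so a model
    downstream can never be prompted with content the user is not
    entitled to. This sketch shows only the gate itself.
    """
    return [
        d for d in DOCS
        if caller_role in d["roles"] and query.lower() in d["text"].lower()
    ]

identity_aware_retrieve("runbook", caller_role="eng")       # sees eng-042
identity_aware_retrieve("compensation", caller_role="eng")  # sees nothing
```

Centralizing this gate is what keeps one department's experiment from becoming another department's data leak: every team builds on the same access check instead of reimplementing it.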

Where do you see consumer and enterprise AI actually converging in practice?

One convergence is around context. Consumer assistants have had to learn how to stitch together context across devices and sessions because people talk to them in the middle of their day. Enterprises are discovering that their employees behave the same way. Someone might start a task in a browser, pick it up in a mobile app, and finish it inside a chat tool. If the AI system behind that journey does not carry the right context with the right permissions, the experience falls apart. Another convergence is around trust signals. In consumer AI, trust shows up in whether you keep talking to the assistant and whether you are willing to let it do more on your behalf. In enterprises, trust shows up in whether teams are comfortable letting AI participate in higher impact workflows. In both cases, you need transparency about what the system knows, what it just inferred, and what it is about to do. Finally, both are moving toward continuous learning from real usage rather than one-time model launches. Consumer AI learned that lesson early, because behavior changes fast. Enterprises are catching up as they see that static models fall out of sync with the business.

If you had to give one piece of advice to leaders in consumer tech and one to leaders in enterprises, what would you say?

For consumer tech leaders, I would say: treat governance and experimentation as the same pipeline. The teams that win will be the ones that can ship changes quickly while proving that those changes respect people’s data, preferences, and consent. That is not a trade-off. It is a design constraint. For enterprise leaders, my advice is: pick a few deep, cross-functional journeys and make AI indispensable there before you scale. It is tempting to sprinkle AI across the organization, but the hard problems live where multiple systems and teams intersect. If you can show that AI makes those journeys faster, safer, and more explainable, expansion becomes much easier to justify. In both worlds, the role of a technical program leader is to make AI feel less like magic and more like reliable infrastructure. When you do that, you get the best of both sides: the accessibility of consumer tools with the discipline of enterprise systems.

Any closing thoughts on where this convergence is headed next?

I think we are heading toward a world where people do not distinguish between “consumer AI” and “enterprise AI” as much as they do today. They will just notice whether the systems around them remember what matters, respect what should be private, and help them move faster with confidence. The encouraging sign is that both sides are learning. Consumer AI is getting more serious about policy and long-term value. Enterprises are getting more serious about usability and human-centered design. My job is to keep building programs that make that convergence practical rather than theoretical. When that happens, the real winners are the people on the other side of the screen or the speaker. They will simply feel like the systems around them are finally listening and acting in ways that align with their goals, whether they are cooking dinner at home or making decisions in a boardroom.
