
Jensen Huang Got It Wrong: Who Should Really Manage AI Agents

At CES 2025, NVIDIA CEO Jensen Huang made a keynote statement that sent waves through the tech world: “The IT department of every company is going to be the HR department of AI agents in the future.” It’s a catchy soundbite—a neat encapsulation of how organizations might manage the rise of AI agents and Virtual Employees (VEs). But dig deeper, and the cracks in this vision become glaringly obvious.

Let’s be clear: this is a bad idea.

The IT Department’s Age-Old Problems

Jensen Huang is a visionary in many respects, but his statement reveals a troubling tendency among technology leaders to oversimplify the complexities of organizational transformation. The assumption that IT departments—traditionally known for bureaucratic bottlenecks, a reactive mindset, and infrastructure-first thinking—could suddenly become strategic stewards of AI agents is as flawed as it is shortsighted.

IT departments, for all their technical prowess, have struggled for decades with alignment. The phrase “IT-business alignment” has been a staple of industry conferences and white papers since the 1990s, yet it remains elusive. How, then, can we expect these same departments to effectively manage AI agents, entities that require not just technical maintenance but role-specific integration, continuous learning, and ethical oversight? It is like asking a hammer to sew a quilt—the tool doesn’t fit the task.

The Real Motivation

It’s not hard to see why this framing appeals to Huang—and to those like him. IT departments are natural customers for NVIDIA’s hardware and software solutions. Selling to an audience that already understands and invests in IT infrastructure is a far easier proposition than addressing the broader organizational changes that AI demands. But this sales-driven narrative does a disservice to the very enterprises Huang claims to serve.

The press has, predictably, seized on Huang’s soundbite with minimal scrutiny. Headlines touting “The New IT HR” are proliferating, but few articles delve into the practical challenges or unintended consequences of this model. Instead, they perpetuate the notion that IT departments can seamlessly transition into managing AI agents, ignoring the vast cultural, operational, and ethical shifts required.

The Bandwagon Effect

Take a moment to consider the recent flurry of publications jumping on this bandwagon. They paint a rosy picture of a future where IT departments effortlessly onboard, train, and manage AI agents. But these pieces often omit critical questions: How will IT departments ensure that AI agents align with business objectives? Who will be accountable when an AI agent’s decision leads to a PR disaster? And what safeguards will be in place to prevent bias or misuse?

This kind of shallow analysis is dangerous. It glosses over the hard work of building robust governance structures and designing role-based metadata frameworks—challenges that your average IT department is ill-equipped to tackle. By buying into this oversimplified vision, organizations risk pouring resources into initiatives doomed to fail.

The Better Path Forward

Rather than forcing AI agent management into the IT department’s purview, enterprises need to think holistically. AI agents and VEs require their own Centers of Excellence—dedicated teams with the cross-disciplinary expertise to manage their lifecycle effectively. These CoEs should be empowered to focus on strategic alignment, ethical governance, and continuous optimization, free from the operational baggage of traditional IT functions.

Each CoE should embody three foundational principles, rooted in the emerging Laws of VE Economics:

  1. The Law of Infinite Scale: As outlined in Virtual Employee Economics, AI agents bring the promise of scalability without linear cost increases. A CoE enables this by orchestrating seamless integration across departments, leveraging role-specific metadata to ensure compatibility.
  2. The Law of Cognitive Commoditization: VEs democratize access to cognitive labor, turning complex problem-solving into a scalable resource. A CoE ensures that this potential is realized strategically, preventing wasteful deployments or misaligned initiatives.
  3. The Law of Exponential Learning: AI agents must continuously improve. The CoE acts as a hub for Reinforcement Learning from Human Feedback (RLHF) and other optimization methods to ensure agents evolve with organizational needs.

What the Ideal CoE Looks Like

An effective CoE for AI agents is not a siloed department but a cross-functional powerhouse. It combines:

  • Governance Expertise: Establishing ethical guidelines and compliance frameworks.
  • Technical Mastery: Implementing tools for monitoring agent performance and ensuring robust security.
  • Business Insight: Collaborating with stakeholders to align agents with KPIs and business goals.
  • Continuous Education: Training employees to work alongside AI and fostering a culture of innovation.

Imagine a Fortune 500 company leveraging a VE CoE to onboard agents that can automate compliance reporting across global operations. This CoE ensures the agents adhere to nuanced regulatory requirements in multiple jurisdictions, align with the CFO’s strategic vision, and continuously adapt as regulations evolve. Such precision and foresight would be impossible under the current IT-first paradigm.

A Final Critique

Huang’s statement is a reminder of the tech industry’s tendency to prioritize convenience over substance. It’s easier to declare IT departments the new HR for AI agents than to grapple with the nuanced realities of managing a digital workforce. But as leaders, we must resist the allure of easy answers and demand solutions that address the complexities of the challenge.

The future of work deserves better than soundbites and bandwagon thinking. It deserves thoughtful, informed strategies that position AI agents and VEs as true assets to the enterprise—not as another checkbox on IT’s to-do list. Let’s aim higher.
