
How 3% of companies win with AI while 97% fail

Rich Mendis from Bytemethod AI reveals the two misconceptions killing most enterprise AI projects and the proven framework that delivers ROI

Listen now on YouTube | Spotify | Apple Podcasts

The Data Faces Podcast with Rich Mendis, CMO of Bytemethod AI

While most companies burn AI budgets on failed experiments, a ServiceNow configuration that used to take IT teams three weeks now happens in 15 minutes. The difference isn't better technology; it's understanding what enterprise AI agents actually do.

Rich Mendis (Richard’s Substack) has watched both the winners and losers up close. As Chief Marketing Officer at Bytemethod AI, he's seen Fortune 500 companies waste millions trying to "plug in" customer service agents that couldn't understand their own product terminology. He's also seen marketing teams eliminate 40 hours of weekly manual work by deploying agents that understand their business context.

About Rich Mendis

Rich Mendis is the Chief Marketing Officer at Bytemethod AI and has been in the AI space for nearly five years. He previously worked at Higher Logic, where he applied AI and agentic AI to HR and staffing. Rich is now helping Dexion, one of the top 10 IT services and staffing firms in the country, build out their AI subsidiary. He has been involved in developing responsible AI management systems and standards like ISO 42001.

"97% of businesses are struggling to show value in AI because they're just throwing stuff up against the wall without really much thought to see what sticks."

— Rich Mendis, Chief Marketing Officer, Bytemethod AI

We're in what Rich calls "the Netscape Navigator days" of enterprise AI agents. The technology works, but most people don't know how to use it yet. The companies figuring it out now are building advantages that will be hard to catch.

What separates the 3% who succeed from the 97% who struggle? They avoid two critical misconceptions that kill most AI agent projects before they start.

Why most AI agent initiatives fail: the two critical misconceptions

Most executives approach enterprise AI agents the same way they'd buy enterprise software. They want to see a demo, negotiate a contract, and flip a switch. This thinking kills projects because of two fundamental misunderstandings about how the technology works.

Misconception #1: AI agents are plug-and-play products

A Fortune 500 company spent $2M trying to "plug in" a customer service agent that couldn't understand their product terminology. The agent worked beautifully in demos using generic examples but failed spectacularly when customers asked real questions about their specific products.

"People tend to think of this technology as a general capability...like a product that you can just plug in...But agentic AI is more of a horizontal capability like electricity or the web."

— Rich Mendis, Chief Marketing Officer, Bytemethod AI

Enterprise AI agents aren't products you install. They're capabilities you apply to specific business processes, just like electricity or the internet. Saying "we use AI" is as meaningless as saying "we use the web." The value comes from how you wire that capability into your workflows.

This misconception explains why the 97% keep failing. They're looking for magic bullets instead of building business-specific automation.

Misconception #2: AI will replace entire jobs immediately

Companies in the struggling 97% expect AI agents to replace entire job functions tomorrow. They create elaborate org charts showing which roles will disappear. Then reality hits. The technology isn't ready for that level of autonomy, and the AI agent ROI calculations fall apart.

Rich references Amara's Law, which states that humans overestimate the near-term impact of technology and underestimate its long-term impact. Right now, most enterprise AI implementations are deep in the overestimation phase.

The successful 3% ask different questions. Instead of "what humans can AI agents replace," they ask "how can AI agents complement our people." They stop trying to automate entire jobs and start automating the boring parts of jobs. AI plus human beats AI versus human every time.

So what does successful enterprise AI agent implementation actually look like? The 3% focus on two specific categories that deliver measurable business results.

Proven enterprise applications: design-time and runtime solutions

"We bucket use cases into two categories, design time use cases, or productivity...which is the configuration and maintenance of enterprise assets, and then the runtime."

— Rich Mendis, Chief Marketing Officer, Bytemethod AI

The 3% who succeed with business AI automation divide deployments into two buckets:

  1. Design-time applications - the behind-the-scenes work that keeps your systems running

  2. Runtime applications - the daily tasks that slow everyone down

Design-time applications: automating system configuration

Consider how much time your teams spend configuring enterprise software. ServiceNow, Salesforce, SAP. These platforms need constant tweaking as business requirements change.

Companies in the winning 3% use enterprise AI agents that understand ticketing requirements and can actually log into ServiceNow development environments to build what you need. Tasks that normally take days or weeks now happen in minutes. The agent reads your requirements, understands the system architecture, and executes the changes. Your IT team reviews the work instead of doing it from scratch.
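The design-time pattern described here (read a requirement, execute changes in a development environment, hand the results to a human reviewer rather than auto-promoting) can be sketched roughly as follows. Every function name below is a hypothetical stand-in, not ServiceNow's actual API; a real agent would call the platform's configuration interfaces at each step:

```python
# Sketch of a design-time agent loop with human review.
# All names are hypothetical stubs, not a real platform API.

def plan_changes(requirement: str) -> list[str]:
    """Stand-in for the reasoning step: turn a ticket
    requirement into concrete configuration actions."""
    return [f"create field for: {requirement}",
            f"update form layout for: {requirement}"]

def apply_in_dev(action: str) -> dict:
    """Stand-in for executing one change in a sandboxed
    development environment (never directly in production)."""
    return {"action": action, "status": "applied-in-dev"}

def run_agent(requirement: str) -> list[dict]:
    """Apply every planned change in dev, then return the
    result set for a human reviewer to approve or reject."""
    return [apply_in_dev(a) for a in plan_changes(requirement)]

results = run_agent("add priority field to incident form")
for r in results:
    print(r)
```

The key design choice is the last step: the agent's output is a reviewable change set, so the IT team shifts from building configurations to approving them.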

Runtime applications: eliminating daily friction

Runtime applications address the repetitive tasks that eat up everyone's day. The biggest AI agent ROI comes from what Rich calls conversational analytics.

Your sales team has prospect meetings. Your HR team conducts interviews. Your customer success team handles support calls. After each conversation, someone has to update a system with what happened. Conversational agents eliminate that step by listening, understanding, and updating automatically.
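The listen-understand-update loop can be sketched as a toy pipeline. Here the extraction step is a keyword match standing in for a domain-tuned model, and the "CRM" is just a dict; all names are hypothetical, not any vendor's real API:

```python
# Toy sketch of conversational analytics: transcript in,
# system of record updated automatically. Hypothetical names.
from dataclasses import dataclass

@dataclass
class CrmUpdate:
    contact: str
    field: str
    value: str

def extract_updates(transcript: str) -> list[CrmUpdate]:
    """Stand-in for the understanding step: a real agent
    would call a domain-tuned model, not keyword matching."""
    updates = []
    for line in transcript.splitlines():
        speaker, _, text = line.partition(": ")
        if "budget" in text.lower():
            updates.append(CrmUpdate(speaker, "budget_note", text))
    return updates

def sync_to_crm(updates: list[CrmUpdate], crm: dict) -> None:
    """Stand-in for the update step: write each extracted
    fact back to the record system with no manual data entry."""
    for u in updates:
        crm.setdefault(u.contact, {})[u.field] = u.value

transcript = "Alice: Our budget for Q3 is 50k\nBob: Sounds good"
crm: dict = {}
sync_to_crm(extract_updates(transcript), crm)
print(crm)  # {'Alice': {'budget_note': 'Our budget for Q3 is 50k'}}
```

The point of the sketch is the shape of the pipeline, not the extraction logic: the human who used to retype the meeting into the CRM now only reviews what the agent wrote.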

Healthcare systems are seeing similar results. Instead of doctors spending hours on documentation, AI agents capture patient interactions and populate electronic health records. Legal firms use agents to extract key information from client calls and update case management systems.

Marketing teams deploy agents for content operations. These tools manage approval workflows, check brand compliance, and search digital asset libraries. Instead of routing documents through email chains, the agent knows who needs to review what and handles the entire process.

These applications work because they're built for specific business contexts, not generic internet training. The companies in the successful 3% understand this difference completely.

Implementation reality check: data, culture, and human oversight

Moving from the struggling 97% to the successful 3% requires avoiding predictable pitfalls. The early demos look amazing. Everyone gets excited. Then you deploy at scale and the magic disappears.

The data quality trap

Most enterprise AI agent projects hit this wall without warning. One client discovered their AI agent was creating duplicate ServiceNow tickets because it couldn't distinguish between similar requests. The cost? 40 hours of manual cleanup work every week.

"When you first start using an LLM...even if you haven't trained it properly, it appears like, oh, man, this is like magic...But it's only later on, when you've used it over and over again at scale, that you start to realize some of the shortcomings or inaccuracies or hallucinations."

— Rich Mendis, Chief Marketing Officer, Bytemethod AI

Generic AI models trained on internet content don't understand your business context. At his previous company, Rich's team took millions of actual interview minutes and trained models to understand who was speaking, what questions mattered, and how to separate useful information from casual conversation. Your ServiceNow configurations, sales conversations, and marketing workflows all have context that generic models miss.

The successful 3% invest in domain-specific training from day one. The struggling 97% discover this requirement after their agents start producing garbage at scale.

Creating the right organizational conditions

Technical readiness is just half the battle. Companies with perfect AI infrastructure fail because they get the human side wrong.

"Readiness is less something that should be looked for and more something that should be proactively prepared for," Rich explains. Most leaders wait for signs that their organization is ready. Leaders in the successful 3% create readiness proactively.

This starts with what Rich calls "risk-mitigated freedom to experiment." Your people need access to AI tools and the permission to try things, but within safe boundaries. That might mean sandbox environments or negotiated contracts that prevent your data from being used to train external models.

You don't need enterprise-grade data governance to start. Rich recommends beginning with basic sandbox access, APIs, and clean datasets. Perfect becomes the enemy of good when you're learning business AI automation.

Getting the incentives right

Companies in the struggling 97% make a fatal mistake. They tell people that finding efficiency means eliminating jobs. "Are you going to tell them that if you find efficiency, we're going to eliminate half of your organization?" Rich asks.

Some organizations force AI adoption with top-down mandates and performance metrics. "I've seen organizations almost force-fit AI and measure people, create these objectives, and they don't even understand what they're trying to do with AI."

"If you can use AI to do your job...that takes five days a week in four days, I'll give you the fifth day off...If you can figure out how to use AI to automate some mundane thing that everyone in the billing department hates doing, then go automate it and tell us what you'd rather spend the time on."

— Rich Mendis, Chief Marketing Officer, Bytemethod AI

The successful 3% reward creativity differently. If someone figures out how to do their job in four days instead of five, give them the fifth day off. If they automate something tedious in the billing department, ask what more interesting work they'd rather do.

This shift creates a new class of power users. Domain experts without technical backgrounds become the most effective AI adopters. Rich predicts major innovations will come from business users, not traditional IT developers.

What stays human

While the struggling 97% try to automate everything, the successful 3% reserve human judgment for decisions that matter most. Rich draws a clear line about when humans must stay in control. Hiring, firing, legal issues, financial matters, healthcare treatments, and legal rulings all require human oversight.

"The ability for humans to interpret imperfect data through the lens of ethics, understanding content and intent, is really, really important...especially in the case where you can take an action that impacts a human."

— Rich Mendis, Chief Marketing Officer, Bytemethod AI

Companies that automate routine work while keeping humans focused on high-stakes decisions will outperform those that try to automate everything. The competitive advantage comes from knowing where to draw that line.

The successful 3% are already working with standards like ISO 42001 for responsible AI management. They're building governance frameworks now, rather than scrambling to add them later.

Your roadmap from the 97% to the 3%

Rich's advice for joining the successful minority cuts through the vendor hype. Don't chase the latest AI trends or copy what worked at other companies. Focus on solving expensive problems in your business.

"Try and identify where there's the biggest opportunity for either cost savings or productivity increase, revenue increase, whatever the case might be...Take a milestone-based approach to help prove not just the technical feasibility, but the fact that you can capture the ROI and expand from there."

— Rich Mendis, Chief Marketing Officer, Bytemethod AI

Companies in the struggling 97% deploy enterprise AI agents because they're cool rather than because they solve real business problems. Start by identifying where you're losing the most money to manual work: ServiceNow configurations that take weeks, sales reps spending hours on CRM data entry, marketing teams routing approvals through endless email chains.

Think domain experts, not IT developers

The biggest insight Rich offers about joining the successful 3% centers on who will drive success. "We'll enable subject matter experts now, wherever they happen to be in the organization, regardless of their technical ability, to be able to leverage automation in really meaningful ways."

Your marketing manager who understands content workflows. Your finance person who knows where the manual reconciliation pain points are. Your HR business partner who sees the same candidate screening inefficiencies every day. These people will find the most valuable AI agent applications because they intimately understand the problems in their domain.

Rich predicts, "We'll see lots of interesting solutions come from people who are not necessarily your traditional IT developer sort of folks."

Remember, we're still in the "Netscape Navigator days" of enterprise AI agents. The companies experimenting now with clear business objectives, the right people, and safe environments will separate themselves from the 97% who continue throwing money at the wall. The question isn't whether AI agents will transform enterprise operations—it's whether you'll join the 3% who figure it out first.

About David Sweenor

David Sweenor is an AI, Generative AI, and Product Marketing Expert. He brings this expertise to the forefront as the founder of TinyTechGuides and host of the Data Faces podcast. A recognized top 25 analytics thought leader and international speaker, David specializes in practical business applications of artificial intelligence and advanced analytics.


With over 25 years of hands-on experience implementing AI and analytics solutions, David has supported organizations including Alation, Alteryx, TIBCO, SAS, IBM, Dell, and Quest. His work spans marketing leadership, analytics implementation, and specialized expertise in AI, machine learning, data science, IoT, and business intelligence.

David holds several patents and consistently delivers insights that bridge technical capabilities with business value.

Follow David on Twitter @DavidSweenor and connect with him on LinkedIn.

Podcast Highlights - Key Takeaways from the Conversation

Guest: Rich Mendis, Chief Marketing Officer at Bytemethod AI
Host: David Sweenor, Data Faces Podcast
Topic: Agents in the Enterprise - Reality vs Hype

Rich's Background & Company Overview

  • 0:45-1:14 - Rich has been in the AI space for 5 years, previously at Higher Logic applying AI to HR/staffing, now at Bytemethod AI (subsidiary of Dexion, top 10 IT services firm)

Defining AI Agents in Enterprise

  • 2:21-3:42 - AI agent definition: "software that can reason and act autonomously on behalf of a human" with understanding of state and memory, can interact with other systems/agents/people

  • Key insight: Should interact with humans for decisions impacting other humans (hiring, firing, legal, financial matters)

Marketing Use Cases

  • 3:53-4:47 - Marketing AI agents: content operations, funnel content for approvals, apply brand voice/compliance, find existing assets in digital asset management

Specialist vs Generalist Agents

  • 5:03-5:44 - Mix of both: horizontal agents across business units vs highly specialized agents (hospital example: X-ray technician agents vs billing agents)

Major Misconceptions

  • 5:57-7:49 - Misconception #1: People think AI is a plug-in product vs. horizontal capability like electricity/web

  • Key stat: "97% of businesses are struggling to show value in AI because they're just throwing stuff up against the wall"

  • Misconception #2: Overestimation in near term, underestimation long-term (Amara's Law)

  • Reality: AI + human collaboration, not AI vs human

Real-World Use Cases

  • 8:16-10:38 - Design-time: ServiceNow configuration (days/weeks → automated), agent logs in and configures to requirements

  • Runtime: Conversational analytics - listens to meetings, understands context, populates systems automatically

Verification & Quality Control

  • 11:11-13:00 - Importance of domain-specific training vs generic models

  • Example: Interview analysis requires training on actual interview data (millions of minutes), not internet content

Data Quality Challenges

  • 16:14-18:19 - Biggest challenge: easy to underestimate data investment

  • Problem: Appears like "magic" initially, but shortcomings emerge at scale

  • Solution: Invest in understanding required data, quality, annotation, proper chunking

Organizational Readiness

  • 19:18-21:40 - Key factors:

    1. "Risk-mitigated freedom" to experiment (sandbox environments)

    2. Right incentives - avoid threatening job elimination

    3. Positive incentive example: "4-day work week if you can do 5-day job in 4 days"

Democratization of AI

  • 22:23-23:42 - Domain knowledge becomes more important than technical skills

  • Subject matter experts will become power users regardless of technical ability

Trust & Human Oversight

  • 29:26-31:25 - Trust requirements: data provenance, behavioral guardrails

  • Critical decisions requiring humans: Hiring, firing, legal, financial, healthcare, legal rulings

  • Human advantage: Interpret imperfect data through ethics lens, understand context/intent

Getting Started Advice

  • 33:48-34:33 - Focus on "ROI driven value" not "AI for the sake of it"

  • Identify biggest opportunities for cost savings/productivity/revenue

  • Use milestone-based approach to prove technical feasibility AND ROI

Future of Work Perspective

  • 24:26-29:25 - Rich's view: Jobs won't disappear overnight, but skills within jobs will change

  • AI replaces skills, not entire jobs - jobs evolve into new forms

  • Historical precedent: Programming democratization (mainframe → 4GL → conversational)

Key Takeaways

  1. 97% failure rate due to misconceptions about plug-and-play nature and job replacement expectations

  2. Two successful categories: Design-time (system configuration) and Runtime (daily task automation)

  3. Critical success factors: Domain-specific training, proper incentives, human-in-the-loop for critical decisions

  4. Future insight: Domain experts will become AI power users, not just IT developers
