How “AI Workslop” Is Draining Modern Enterprises

A troubling phenomenon is spreading through corporate America, creating friction where artificial intelligence promised efficiency. Researchers at BetterUp and Stanford’s Social Media Lab coined the term “workslop” to describe AI-generated content that appears polished but lacks the substance to meaningfully advance any task. This represents more than a minor irritation. The research estimates that workers waste nearly two hours correcting or redoing each instance of workslop they receive, creating a financial burden that amounts to roughly $9 million annually for a company of 10,000 employees.
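For readers who want to see the arithmetic behind that figure, here is a back-of-envelope sketch in Python. Only the two-hours-per-instance estimate and the 10,000-employee headcount come from the paragraph above; the share of employees affected, the instances per month, and the loaded hourly cost are illustrative assumptions, not parameters from the BetterUp and Stanford research.

```python
# Back-of-envelope workslop cost model. The hours-per-instance figure and
# headcount come from the article; every other input is an illustrative
# assumption, not a number from the BetterUp/Stanford research.
HEADCOUNT = 10_000           # company size used in the article
SHARE_AFFECTED = 0.40        # assumed share of employees hit in a given month
INSTANCES_PER_MONTH = 1      # assumed workslop instances per affected employee
HOURS_PER_INSTANCE = 2.0     # "nearly two hours" per instance, per the article
LOADED_HOURLY_COST = 94.0    # assumed fully loaded cost of knowledge work, USD/hour

monthly_hours = HEADCOUNT * SHARE_AFFECTED * INSTANCES_PER_MONTH * HOURS_PER_INSTANCE
annual_cost = monthly_hours * LOADED_HOURLY_COST * 12

print(f"Hours lost per month: {monthly_hours:,.0f}")   # 8,000
print(f"Annual cost: ${annual_cost:,.0f}")             # $9,024,000 with these inputs
```

With those assumed inputs, the model lands near the reported $9 million. The point is that even modest per-instance losses compound quickly at enterprise scale.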

The damage extends beyond lost productivity. The research reveals that over half of workers who receive workslop report diminished confidence in their colleagues’ capabilities and reliability. This creates what researchers describe as a “sinking feeling of suspicion” that degrades collaboration across teams. When 54 percent view AI-using colleagues as less creative, 42 percent as less trustworthy, and 37 percent as less intelligent, the interpersonal costs may ultimately exceed the financial ones.

The instinctive response is to blame either the technology itself or employee laziness, but both explanations miss the mark. Workslop proliferates in environments where employees view AI as a mysterious “black box,” a powerful tool they feel pressured to use but do not truly understand. That pressure frequently stems from pervasive automation anxiety: recent surveys show that 89 percent of workers express concern about AI’s impact on their job security, and 65 percent worry that AI might replace their specific role. When workers see headlines about AI-driven layoffs and fear obsolescence, they turn to performative adoption.

This creates a toxic cycle. Individuals use AI to generate content quickly just to prove they are using it, regardless of underlying quality. The goal shifts from solving a business problem to simply checking the “used AI” box. One project manager explained the predicament: “Receiving this poor quality work created a huge time waste and inconvenience for me. Since it was provided by my supervisor, I felt uncomfortable confronting her about its poor quality and requesting she redo it. So instead, I had to take on effort to do something that should have been her responsibility.”

The result follows a predictable pattern. The sender saves a few minutes of effort while the receiver inherits hours of work trying to decipher, edit, or simply redo the task from scratch. Artificial intelligence makes this productivity-draining behavior scalable, enabling the rapid creation of useless content with minimal effort.

Forward-thinking organizations combat this trend by fostering an AI culture of agency and co-creation. The foundational principle is simple but profound: people support what they create. Instead of treating employees as passive consumers of AI-generated content, this approach turns them into active builders of their own AI solutions. Using accessible no-code and low-code platforms, non-technical employees from departments like marketing, HR, or operations can design and build AI assistants tailored to their specific, real-world workflows.

When an employee builds a tool, they must think critically about the entire process. They define a successful outcome, structure the necessary inputs, and iteratively refine prompts to ensure quality and relevance. The AI transforms from a mysterious black box into a transparent partner in solving a well-understood problem. This hands-on experience not only demystifies the technology but also instills a deep sense of ownership over the quality of output. Building on low-code platforms lets workers learn AI’s inherent limitations firsthand, which paradoxically increases their comfort with the technology and makes them more discerning critics of its output.
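A minimal sketch of that build loop appears below, assuming the OpenAI Python SDK. On a no-code platform the same steps happen through a visual builder rather than code, and the quality gate, prompt structure, and function names here are hypothetical stand-ins for whatever standard the builder defines.

```python
# Sketch of the define-refine loop behind an employee-built assistant.
# Assumes the OpenAI Python SDK (reads OPENAI_API_KEY from the environment);
# the quality gate and prompt structure are hypothetical illustrations.
from openai import OpenAI

client = OpenAI()

def meets_standard(draft: str) -> bool:
    """Hypothetical quality gate: the builder decides what 'good' means."""
    return len(draft.split()) > 50 and "lorem ipsum" not in draft.lower()

def build_prompt(task: str, inputs: dict) -> str:
    """Structure the inputs explicitly instead of pasting raw text."""
    context = "\n".join(f"{key}: {value}" for key, value in inputs.items())
    return f"Task: {task}\n\nContext:\n{context}\n\nWrite a first draft."

def run_assistant(task: str, inputs: dict, max_rounds: int = 3) -> str:
    prompt = build_prompt(task, inputs)
    draft = ""
    for _ in range(max_rounds):
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        draft = response.choices[0].message.content or ""
        if meets_standard(draft):
            return draft
        # Refine rather than ship: feed the shortfall back into the prompt.
        prompt += "\n\nThe previous draft was too thin. Add concrete specifics."
    return draft  # anything that never passed the gate gets human review
```

The plumbing matters less than the habit it encodes: the builder, not the model, decides what counts as good enough to send to a colleague.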

This capability-building model has proven effective in practice. Consider the example of a mid-sized law firm where paralegals began using public AI tools for first drafts of legal documents. The results often contained boilerplate language that missed crucial jurisdictional nuances, creating significant friction and wasting senior attorneys’ billable hours. After shifting to a co-creation model, a team of paralegals and junior associates built their own “Contract Review Assistant,” training it on a curated library of the firm’s most successful briefs and contracts. The AI-assisted drafts became highly aligned with the firm’s standards from the start, reducing senior review time by 55 percent and recovering over 3,200 billable hours annually.

In another instance, a manufacturing company’s quality control team initially used a generic data analysis tool that produced reports too high-level to be actionable. The team then participated in a workshop where they built their own “Defect Tracking Bot,” designing it to cross-reference sensor data with maintenance logs. Because they designed the logic themselves, the bot produced specific, actionable insights, contributing to $1.2 million in savings from productivity gains and defect reduction in its first year.
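The cross-referencing logic at the heart of such a bot can be sketched in a few lines of pandas. Every column name, data value, and threshold below is hypothetical; a real deployment would read from the plant’s actual sensor feeds and maintenance system.

```python
# Illustrative sketch of cross-referencing sensor readings with maintenance
# logs. All column names, values, and thresholds are hypothetical.
import pandas as pd

sensors = pd.DataFrame({
    "timestamp": pd.to_datetime(["2025-03-01 08:00", "2025-03-01 09:30"]),
    "machine_id": ["M1", "M2"],
    "vibration_mm_s": [7.2, 2.1],   # hypothetical vibration readings
}).sort_values("timestamp")

maintenance = pd.DataFrame({
    "timestamp": pd.to_datetime(["2025-02-10 14:00", "2025-02-28 16:00"]),
    "machine_id": ["M1", "M2"],
    "action": ["bearing replaced", "belt tensioned"],
}).sort_values("timestamp")

# Attach the most recent maintenance event to each reading, per machine.
joined = pd.merge_asof(
    sensors,
    maintenance,
    on="timestamp",
    by="machine_id",
    direction="backward",
)

# Flag readings above a (hypothetical) vibration threshold so the report
# names a specific machine and its last service, not just an aggregate.
alerts = joined[joined["vibration_mm_s"] > 5.0]
print(alerts[["machine_id", "vibration_mm_s", "action"]])
```

Because the team chose the join keys and thresholds themselves, the output points to a specific machine and its service history rather than the high-level summaries that made the generic tool unusable.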

Eradicating workslop requires a deliberate strategic shift from leadership. This begins by prioritizing strategy over tools, conducting readiness assessments to understand cultural and technical gaps before deploying any platform. Leaders should cultivate agency rather than dependency by framing AI as a set of building blocks and investing in training non-technical teams to create their own solutions.

Organizations must establish clear quality standards by facilitating team conversations about what constitutes high-quality, AI-assisted work and creating communities of practice where employees share best practices. Finally, measuring outcomes rather than just activity becomes crucial. Incentives should reward substantive results like tools that solve real problems, improve team collaboration, or enhance deliverable quality, not vanity metrics like the number of AI prompts generated.

Workslop signals a shallow and ultimately ineffective AI implementation. The path forward lies not in finding a better AI model but in adopting a more thoughtful leadership approach that empowers people, fosters a culture of creation, and builds genuine, human-centric AI capability. This represents the only sustainable path to turning technological potential into a true, defensible competitive advantage.

Key Take-Away

Workslop exposes the dark side of rushed AI adoption: polished but empty output that drains productivity and trust. The cure isn’t better tech, but empowering people to co-create AI tools with purpose, ownership, and real-world impact.

Image credit: Tima Miroshnichenko/pexels


Dr. Gleb Tsipursky, called the “Office Whisperer” by The New York Times, helps tech-forward leaders replace overpriced vendors with staff-built AI solutions. He serves as the CEO of the future-of-work consultancy Disaster Avoidance Experts. Dr. Gleb wrote seven best-selling books, and his forthcoming book with Georgetown University Press is The Psychology of Generative AI Adoption (2026). His most recent best-seller is ChatGPT for Leaders and Content Creators: Unlocking the Potential of Generative AI (Intentional Insights, 2023). His cutting-edge thought leadership was featured in over 650 articles and 550 interviews in Harvard Business Review, Inc. Magazine, USA Today, CBS News, Fox News, Time, Business Insider, Fortune, The New York Times, and elsewhere. His writing was translated into Chinese, Spanish, Russian, Polish, Korean, French, Vietnamese, German, and other languages. His expertise comes from over 20 years of consulting, coaching, and speaking and training for Fortune 500 companies from Aflac to Xerox. It also comes from over 15 years in academia as a behavioral scientist, with 8 years as a lecturer at UNC-Chapel Hill and 7 years as a professor at Ohio State. A proud Ukrainian American, Dr. Gleb lives in Columbus, Ohio.