How Do You Actually Teach AI Skills?

With a recent Pew Research report showing that 50% of Americans are more concerned than excited about AI, and only 10% more excited than concerned, employers who want employees to learn AI skills have a tough slog ahead. Here’s an idea: if you want employees to adopt generative AI, put them in a room, give them real data, and require working demos by the end of the sprint.
The strongest case for this format comes from how learning and adoption actually happen. A widely cited active learning meta-analysis found higher performance and fewer failures when people solve problems directly rather than listen to lectures. A rigorous project-based learning review ties artifact creation to gains in achievement. A 2025 working paper on generative AI in customer support measured a 14 to 15 percent productivity lift, with larger gains for novices, evidence that guided tools and exemplars accelerate capability on the job. Organizational change also follows visibility and peer proof, which is the core of the diffusion literature synthesized in a classic diffusion analysis. Put people together, give them governed data, and require five-minute demos before executive judges. That is the fastest way to move from interest to working software and from working software to credible pilots.
Time pressure and coaching focus attention. A recent hackathon systematic review maps how short, intense builds drive teamwork, problem solving, and persistence when organizers set clear goals and provide structure. A complementary educational evaluation reaches similar conclusions and highlights the value of facilitators who unblock teams during the sprint. The result is not only faster skill acquisition but also a clearer path to standardization, governance, and scale.
The American Society for Nondestructive Testing turned the research into a program that shipped results in public. Its 2025 conference positioned an AI Agent Battle as a marquee experience on the official agenda, with a dedicated session description spelling out a two-day, build-and-compete format tied to practical NDT workflows. The broader events hub framed the week as hands-on and technology-forward. ASNT primed the field before the showdown through a public webinar that introduced agent patterns, build steps, and governance expectations, which lowered activation energy for first-time builders.
The structure mattered. Attendees did not sit for long lectures. They built agents tied to real inspection tasks, iterated in public, and showed results on a deadline. That format aligns with strong evidence that active learning outperforms lecture-first instruction, including a well-cited meta-analysis that found higher performance and lower failure rates when learners engage directly with problems. Reviews of project-based learning show similar gains, as documented in a recent higher-education review and a science-education meta-analysis. Research on hackathon-style builds also points to improved teamwork, problem solving, and persistence when the event is time-bounded and well coached, as summarized in a 2024 systematic review and a complementary educational evaluation.
As Barry Schieferstein, the Chief Operating Officer of ASNT, noted after the event:
“I was struck by how the AI Agent Challenge transformed what a conference experience can be. Instead of talking about innovation, our members were building it, creating real AI agents that connect directly to nondestructive testing practice. For ASNT, this was more than a workshop; it was a statement about how associations can lead their industries into the future. We proved that hands-on, coached learning not only transfers skills faster but also creates deeper engagement for members and sponsors alike. It showed that associations can be at the forefront of applied technology, not just in what we teach but in how we learn together.”
So how should business and government leaders adapt this model? Start by promising what matters to executives: working demos on a clock that address real workflows. Publish an internal schedule that mirrors ASNT’s public agenda, including the sprint start, the demo window, and the judging criteria. Staff expert facilitators to roam as unblockers rather than lecturers. Offer a short pre-brief a week before the build that mirrors ASNT’s preparatory webinar, where you introduce three agent patterns your business needs and review data guardrails. Provide a sandbox that mirrors production constraints and preload governed, redacted, or synthetic datasets so teams can build safely without waiting on approvals.
Treat the workshop like a product launch, not a class. Give it a name, publish rules, and state deliverables up front. Require three artifacts from every team by the final bell: a short problem statement, a must-have capability checklist, and a data access plan that names sources and permissions. Record every demo and publish the recordings on an internal portal. Tag entries by workflow and data domain, and include a lightweight request form for productionization. Commit to a two-week decision window for the strongest prototypes to move into controlled pilots. As teams progress, connect their outcomes to the enterprise business case with the same clarity that the generative AI productivity working paper uses to report throughput gains.
Close the loop before momentum fades. Ask each team to submit a one-page risk register that captures data dependencies, security exposures, and monitoring needs. Stand up a lightweight review that approves top prototypes for pilots. Begin the next quarter’s build with quick updates from prior winners showing movement on cycle time, defect rates, or satisfaction. Over time, you build a library of approved, reusable agents and a standing competition that sources the next candidates. The effect is cumulative. The diffusion analysis predicts faster uptake when exemplars are visible, while the hackathon systematic review and the active learning meta-analysis explain why coached, time-boxed practice makes those exemplars stick.
A well-run build workshop is not theater. It is an evidence-backed way to translate generative AI from headlines into operating leverage. The research behind active learning, project-based practice, and hackathon design favors coached, time-bounded builds that culminate in visible demos. The ASNT AI Agent Battle shows how to stage the format at scale. Leaders who adopt this model will leave not with slide decks but with demos, data plans, and pilots they can fund immediately. That is how education becomes deployment and deployment becomes sustained competitiveness.
Key Take-Away
To accelerate adoption and impact, teach AI skills through hands-on, coached, time-boxed builds that produce real demos, connect to workflows, and make learning visible and actionable.

Image credit: fauxels/pexels
Dr. Gleb Tsipursky, called the “Office Whisperer” by The New York Times, helps tech-forward leaders replace overpriced vendors with staff-built AI solutions. He serves as the CEO of the future-of-work consultancy Disaster Avoidance Experts. Dr. Gleb wrote seven best-selling books, and his forthcoming book with Georgetown University Press is The Psychology of Generative AI Adoption (2026). Prior to that, he wrote ChatGPT for Leaders and Content Creators (2023). His cutting-edge thought leadership was featured in over 650 articles in prominent venues such as Harvard Business Review, Fortune, and Fast Company. His expertise comes from over 20 years of consulting for Fortune 500 companies from Aflac to Xerox and over 15 years in academia as a behavioral scientist at UNC-Chapel Hill and Ohio State. A proud Ukrainian American, Dr. Gleb lives in Columbus, Ohio.