Should Associations “Move Fast and Break Things” With Generative AI?


In Silicon Valley, the mantra “move fast and break things” was once seen as a bold rallying cry for innovation. But as associations begin exploring the integration of Generative AI (Gen AI) into member services, professional development, and operations, this reckless pace may do more harm than good. Associations are not startups—our stakeholders are members, volunteers, chapter leaders, and professionals who rely on us for guidance, trust, and stability. For association executives, the challenge is not just to innovate but to innovate responsibly, inclusively, and ethically.

A recent survey by the Artificial Intelligence Policy Institute (AIPI) shows just how far public perception has shifted. A staggering 72% of Americans want AI development to slow down. For associations—often the trusted standard-bearers for entire professions—this signals an urgent need to proceed with caution and transparency.

Gen AI Anxiety: A Widespread, Cross-Sector Concern

The AIPI data reflects concerns that go beyond political or professional divides. With 86% of Americans believing AI could cause catastrophic harm and 76% fearing it could one day threaten human existence, these are not fringe worries. These are mainstream anxieties, and they are mirrored within our associations—from volunteer leaders to members attending AI-related webinars or continuing education.

Unlike corporate environments, associations serve as both internal stewards (managing staff, volunteers, and chapters) and external leaders (setting standards, certifying knowledge, and advocating for the profession). The introduction of Gen AI touches every aspect of this dual role.

For example, members may wonder: Will AI replace their certification process? Could it write continuing education materials without input from human experts? Will it be used to evaluate their job performance or credentials?

These are not theoretical questions. In my consulting work with associations, I frequently encounter board members, staff, and chapter leaders who express legitimate fears about AI’s ethical implications, fairness, and impact on their professional identity. They want to understand what Gen AI means for governance, for member trust, and for their profession’s future.

Case Study: A National Manufacturing Association’s Thoughtful Approach to Gen AI

I recently worked with a national manufacturing association exploring how Gen AI could support both internal operations and member-facing services. The leadership team was intrigued by AI’s potential to streamline document drafting, automate member inquiries, and personalize continuing education.

But they were also deeply aware of the risks—particularly around bias in AI-generated content and the possibility of alienating members who felt that AI diminished professional judgment.

Rather than “move fast,” this association chose to “move thoughtfully.” We began with staff and volunteer leadership training sessions on AI fundamentals. Then we facilitated working groups with chapter leaders to discuss potential use cases and ethical boundaries. Importantly, we invited members into the conversation through surveys and town halls. This wasn’t just good optics—it was governance in action.

The result? The association piloted an AI-supported help desk for member services—but kept a human in the loop to ensure accuracy and empathy. They also created a cross-functional AI Ethics Committee, including member representatives, to vet future applications. This approach reinforced the association’s credibility, empowered chapters to feel aligned with national leadership, and strengthened trust across the membership base.

Internal Dynamics: Staff and Volunteer Concerns About AI

Association executives know that trust and alignment begin at home—with staff and volunteers. As the engine behind programs, communications, and credentialing, internal teams must understand and feel confident about Gen AI.

But many don’t. In my workshops, I hear the same questions repeatedly:

  • “Will AI take my job?”
  • “Can we trust it to write for us or represent our voice?”
  • “Who’s accountable if it makes a mistake?”

Associations can’t afford to dismiss these concerns. Staff morale, volunteer engagement, and mission alignment depend on clarity and inclusion. That’s why I recommend associations adopt an AI transparency framework—something I help my clients create—to ensure staff and volunteers are informed about:

  • Where AI is being used
  • What decisions AI influences (if any)
  • Who oversees AI outputs
  • How AI aligns with organizational values

This framework is especially vital for associations with chapters, sections, or state affiliates. If the national office rolls out AI tools without including regional leaders in the planning, fragmentation and mistrust can quickly follow.

External Role: Setting Standards and Safeguarding Public Trust

Beyond internal alignment, associations are stewards of the public good. Whether it’s a medical board certifying physicians or a trade association advocating for best practices, associations define the ethical and professional guardrails of entire fields. How associations use Gen AI sends a signal to the broader public—and to legislators—about what responsible innovation looks like.

The fear of AI-enabled errors, misinformation, or bias is especially pronounced in fields like healthcare, education, and finance. If an AI-generated recommendation leads to harm, will members or the public blame the association? Will trust in the profession—and the association—erode?

This is why associations must lead with integrity. Rather than simply adopting Gen AI, they must also develop:

  • Guidance for members on ethical AI use within the profession
  • Training programs that help members critically evaluate AI-generated content
  • Policy advocacy that promotes responsible regulation aligned with professional standards

In this way, associations don’t just respond to Gen AI—they shape the conversation around it.

Managing Member Expectations: Communication and Inclusion

One of the most effective tools associations have is their voice. Members look to their association for clarity amid confusion. When it comes to Gen AI, silence is not neutral—it can be perceived as complicity or confusion.

Proactive communication is key. This includes:

  • Publishing position statements on AI use within the profession
  • Hosting webinars or panels with diverse member perspectives
  • Sharing case studies or pilot projects transparently, including successes and failures

Members want to feel that their concerns are acknowledged and their professional identities respected. Including them in the AI journey builds belonging, not resistance.

Global Implications: Aligning With International Norms

Many associations operate internationally or collaborate across borders. The AIPI poll shows that 70% of Americans want global coordination to prevent AI-related catastrophe—on par with concerns about nuclear war or pandemics.

Associations should participate in cross-sector, international efforts to establish AI safety standards, especially in areas like ethics, privacy, and data security. Whether you’re part of a global federation or a U.S.-based body with international members, aligning your AI policies with international norms can reinforce your reputation and credibility.

Conclusion: Move Thoughtfully, Not Recklessly

Associations are not tech startups. We are the trusted conveners, the educators, the stewards of our professions. The “move fast and break things” approach is ill-suited to organizations whose greatest asset is trust.

The path forward is clear:

  • Slow down and listen—especially to members, volunteers, and staff
  • Build inclusive processes that invite dialogue and dissent
  • Prioritize education and transparency internally and externally
  • Align innovation with mission, not just efficiency

Gen AI holds tremendous promise for associations—but only if deployed with care, collaboration, and a deep respect for the people we serve.

Key Take-Away

With Generative AI, associations must prioritize thoughtful, transparent integration that aligns with their mission—balancing innovation with trust, ethics, and member engagement to lead responsibly in a rapidly evolving landscape.



Dr. Gleb Tsipursky was named “Office Whisperer” by The New York Times for helping leaders overcome frustrations with hybrid work and Generative AI. He serves as the CEO of the future-of-work consultancy Disaster Avoidance Experts. Dr. Gleb wrote seven best-selling books, and his two most recent ones are Returning to the Office and Leading Hybrid and Remote Teams and ChatGPT for Thought Leaders and Content Creators: Unlocking the Potential of Generative AI for Innovative and Effective Content Creation. His cutting-edge thought leadership was featured in over 650 articles and 550 interviews in Harvard Business Review, Inc. Magazine, USA Today, CBS News, Fox News, Time, Business Insider, Fortune, The New York Times, and elsewhere. His writing was translated into Chinese, Spanish, Russian, Polish, Korean, French, Vietnamese, German, and other languages. His expertise comes from over 20 years of consulting, coaching, and speaking and training for Fortune 500 companies from Aflac to Xerox. It also comes from over 15 years in academia as a behavioral scientist, with 8 years as a lecturer at UNC-Chapel Hill and 7 years as a professor at Ohio State. A proud Ukrainian American, Dr. Gleb lives in Columbus, Ohio.