How to Rein in the AI Threat? Let the Lawyers Loose.


55% of Americans are worried about the threat AI poses to the future of humanity, according to a recent Monmouth University poll. In an era when technological advancements are accelerating at breakneck speed, it is crucial to ensure that artificial intelligence (AI) development remains in check. As AI-powered chatbots like ChatGPT become increasingly integrated into our daily lives, it is high time we address their legal and ethical implications.


And some have done so. A recent letter signed by Elon Musk, who co-founded OpenAI, Steve Wozniak, the co-founder of Apple, and over 1,000 other AI experts and funders calls for a six-month pause in training new models. Time, in turn, published an article by Eliezer Yudkowsky, a founder of the field of AI alignment, calling for a much more hard-line solution: a permanent global ban, backed by international sanctions on any country pursuing AI research.

The problem with these proposals, however, is that they require coordinating numerous stakeholders across a wide variety of companies and governments. Let me share a more modest proposal that is far more in line with our existing methods of reining in potentially threatening developments: legal liability.

By leveraging legal liability, we can effectively slow AI development and make certain that these innovations align with our values and ethics. We can push AI companies themselves to promote safety and to innovate in ways that minimize the threat their products pose to society, and we can ensure that AI tools are developed and used ethically and effectively, as I discuss in depth in my new book, ChatGPT for Thought Leaders and Content Creators: Unlocking the Potential of Generative AI for Innovative and Effective Content Creation.

Legal Liability: A Vital Tool for Regulating AI Development

Section 230 of the Communications Decency Act has long shielded internet platforms from liability for content created by users. However, as AI technology becomes more sophisticated, the line between content creators and content hosts blurs, raising questions about whether AI-powered platforms like ChatGPT should be held liable for the content they produce.

The introduction of legal liability for AI developers will compel companies to prioritize ethical considerations, ensuring that their AI products operate within the bounds of social norms and legal regulations. They will be forced to internalize what economists call negative externalities, meaning negative side effects of products or business activities that affect other parties. A negative externality might be loud music from a nightclub bothering neighbors. The threat of legal liability for negative externalities will effectively slow down AI development, providing ample time for reflection and the establishment of robust governance frameworks.

To curb the rapid, unchecked development of AI, it is essential to hold developers and companies accountable for the consequences of their creations. Legal liability encourages transparency and responsibility, pushing developers to prioritize the refinement of AI algorithms, reducing the risks of harmful outputs, and ensuring compliance with regulatory standards.

For example, an AI chatbot that perpetuates hate speech or misinformation could lead to significant social harm. A more advanced AI given the task of improving the stock of a company might – if not bound by ethical concerns – sabotage its competitors. By imposing legal liability on developers and companies, we create a potent incentive for them to invest in refining the technology to avoid such outcomes.

Legal liability, moreover, is much more doable than a six-month pause, to say nothing of a permanent ban. It’s aligned with how we do things in America: instead of having the government regulate business up front, we permit innovation but punish the negative consequences of harmful business activity.

The Benefits of Slowing Down AI Development

Ensuring Ethical AI: By slowing down AI development, we can take a deliberate approach to the integration of ethical principles in the design and deployment of AI systems. This will reduce the risk of bias, discrimination, and other ethical pitfalls that could have severe societal implications.

Avoiding Technological Unemployment: The rapid development of AI has the potential to disrupt labor markets, leading to widespread unemployment. By slowing down the pace of AI advancement, we provide time for labor markets to adapt and mitigate the risk of technological unemployment.

Strengthening Regulations: Regulating AI is a complex task that requires a comprehensive understanding of the technology and its implications. Slowing down AI development allows for the establishment of robust regulatory frameworks that address the challenges posed by AI effectively.

Fostering Public Trust: Introducing legal liability in AI development can help build public trust in these technologies. By demonstrating a commitment to transparency, accountability, and ethical considerations, companies can foster a positive relationship with the public, paving the way for a responsible and sustainable AI-driven future.

Concrete Steps to Implement Legal Liability in AI Development

Clarify Section 230: Section 230 does not appear to cover AI-generated content. The law defines an “information content provider” as “any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service.” What counts as “development” of content “in part” remains somewhat ambiguous, but judicial rulings have determined that a platform cannot rely on Section 230 for protection if it supplies “pre-populated answers” such that it is “much more than a passive transmitter of information provided by others.” Thus, it’s highly likely that courts would find that AI-generated content is not covered by Section 230. It would be helpful for those who want to slow AI development to launch legal cases that enable courts to clarify this matter. By establishing that AI-generated content is not exempt from liability, we create a strong incentive for developers to exercise caution and ensure their creations meet ethical and legal standards.

Establish AI Governance Bodies: In the meantime, governments and private entities should collaborate to establish AI governance bodies that develop guidelines, regulations, and best practices for AI developers. These bodies can help monitor AI development and ensure compliance with established standards. Doing so would help manage legal liability and facilitate innovation within ethical bounds.

Encourage Collaboration: Fostering collaboration between AI developers, regulators, and ethicists is vital for the creation of comprehensive regulatory frameworks. By working together, stakeholders can develop guidelines that strike a balance between innovation and responsible AI development.

Educate the Public: Public awareness and understanding of AI technology are essential for effective regulation. By educating the public on the benefits and risks of AI, we can foster informed debates and discussions that drive the development of balanced and effective regulatory frameworks.

Develop Liability Insurance for AI Developers: Insurance companies should offer liability insurance for AI developers, incentivizing them to adopt best practices and adhere to established guidelines. This approach will help reduce the financial risks associated with potential legal liabilities and promote responsible AI development.

Conclusion

The increasing prominence of AI technologies like ChatGPT highlights the urgent need to address the ethical and legal implications of AI development. By harnessing legal liability as a tool to slow down AI development, we can create an environment that fosters responsible innovation, prioritizes ethical considerations, and minimizes the risks associated with these emerging technologies. It is essential that developers, companies, regulators, and the public come together to chart a responsible course for AI development that safeguards humanity’s best interests and promotes a sustainable, equitable future.

Key Take-Away

Implementing legal liability for AI developers is a crucial step to slow down AI development, prioritize ethical considerations, and ensure responsible innovation for a sustainable future.

Image credit: Tima Miroshnichenko/Pexels


Dr. Gleb Tsipursky was lauded as “Office Whisperer” and “Hybrid Expert” by The New York Times for helping leaders use hybrid work to improve retention and productivity while cutting costs. He serves as the CEO of the boutique future-of-work consultancy Disaster Avoidance Experts. Dr. Gleb wrote the first book on returning to the office and leading hybrid teams after the pandemic, his best-seller Returning to the Office and Leading Hybrid and Remote Teams: A Manual on Benchmarking to Best Practices for Competitive Advantage (Intentional Insights, 2021). He authored seven books in total, and is best known for his global bestseller, Never Go With Your Gut: How Pioneering Leaders Make the Best Decisions and Avoid Business Disasters (Career Press, 2019). His cutting-edge thought leadership was featured in over 650 articles and 550 interviews in Harvard Business Review, Forbes, Inc. Magazine, USA Today, CBS News, Fox News, Time, Business Insider, Fortune, and elsewhere. His writing was translated into Chinese, Korean, German, Russian, Polish, Spanish, French, and other languages. His expertise comes from over 20 years of consulting, coaching, and speaking and training for Fortune 500 companies from Aflac to Xerox. It also comes from over 15 years in academia as a behavioral scientist, with 8 years as a lecturer at UNC-Chapel Hill and 7 years as a professor at Ohio State. A proud Ukrainian American, Dr. Gleb lives in Columbus, Ohio. In his free time, he makes sure to spend abundant quality time with his wife to avoid his personal life turning into a disaster. Contact him at Gleb[at]DisasterAvoidanceExperts[dot]com, follow him on LinkedIn @dr-gleb-tsipursky, Twitter @gleb_tsipursky, Instagram @dr_gleb_tsipursky, Facebook @DrGlebTsipursky, Medium @dr_gleb_tsipursky, YouTube, and RSS, and get a free copy of the Assessment on Dangerous Judgment Errors in the Workplace by signing up for the free Wise Decision Maker Course at https://disasteravoidanceexperts.com/newsletter/.