On March 21, Australian regional mayor Brian Hood announced plans to take legal action against OpenAI over an alleged error generated by ChatGPT, the company's AI-powered language tool. Hood's legal team says the tool could damage his reputation by falsely stating that he was convicted of bribery.

ChatGPT is an automated language product that uses artificial intelligence to generate text. The tool has previously drawn criticism for factual errors and bias, underscoring the need for greater accuracy and stronger ethical safeguards in AI-powered language tools.

In response to the alleged error, Hood has given OpenAI 28 days to correct ChatGPT's responses and prevent the tool from spreading the false claim, or face legal action. If Hood sues, it would reportedly be the first time a person has sued ChatGPT's owner over statements generated by the tool.

Hood's reputation is central to his role as a regional mayor, and a defamation lawsuit may be his only recourse if the alleged ChatGPT-generated errors are not corrected. His lawyers have sent a letter of concern to ChatGPT's owner, OpenAI, demanding that the false statements about their client be fixed.

OpenAI has not yet responded to requests for comment on the alleged error. The outcome of the case will be closely watched by the technology industry, as it could set a precedent for the accountability of AI language tools and their impact on individuals and society.