Artificial intelligence (AI) is rapidly becoming integrated into employers' decision-making. Employers are using AI tools to streamline hiring, manage performance and even answer legal questions.

For employers making decisions that affect employees, the use of AI raises serious concerns. Employers should be aware of the potential legal risks of AI tools.

In February 2026, two different courts issued opinions regarding the use of AI in litigation. First, a New York court issued a decision in a criminal case. Just seven days earlier, a Michigan court had issued a decision in a civil case that seemed to reach the opposite conclusion. As is often the case, the legal landscape can be confusing.

In the first case, the court ruled that documents a criminal defendant created through AI and sent to his attorney were not protected by the attorney-client privilege. The defendant did this on his own, without direction from his attorney. The court found that the AI tool was not a lawyer and that the platform's terms of service did not create an expectation of privacy. The court also found that the AI-prepared documents were not created at the direction of the attorney and did not reflect legal strategy. As a result, the government was able to obtain the documents.

In the second case, an attorney had used ChatGPT to prepare legal briefs. During discovery, opposing counsel asked the court to compel production of those materials, and the court denied the motion. The court held that the materials were protected work product under the rules of civil procedure because they were prepared in anticipation of litigation. Specifically, the court found that the party's use of AI did not waive the protection because AI platforms are “tools, not persons,” and waiver requires disclosure to an opposing party.

The lesson of these cases is that AI itself does not waive the attorney-client privilege or work product protection; how individuals use AI does. So it may not be a good idea to use AI to conduct your own independent legal analysis.

AI tools are not lawyers, and AI-generated responses are not a substitute for legal advice. While AI may generate a well-structured response, it is also prone to producing fabricated information, commonly referred to as “hallucinations.”

AI tools can misstate federal, state or local law, fail to account for the jurisdiction where your organization operates, and rely on outdated legal standards.

As we’ve said many times, AI is the future, whether we like it or not. Employers that take an informed approach will be best positioned to benefit from AI while minimizing legal risk. Relying solely on AI guidance, without proper safeguards such as consulting legal counsel, can expose employers to significant liability. Ultimately, AI should augment, not replace, the advice of a competent attorney.

The information and opinions expressed are for educational purposes only and are based on current practice, industry-related knowledge and business expertise. The information provided shall not be construed as legal advice, express or implied.