
Excerpted from a Constangy Brooks Smith and Prophete Blog by Claire Bowen and Anna Schall Kreamer
Turning AI inside out.
Just as organizations use artificial intelligence to monitor the workplace, they must also monitor themselves and, as a result, reshape their risk, governance, and security expectations. AI must be managed from two directions: from the inside out, ensuring that organizations use AI in ways that preserve trust, and outside in, securing the AI systems against external threats.
Defining “AI” in an enterprise context
“AI” is frequently used as a catch‑all term for anything automated or technology‑driven. This obscures meaningful differences in risk, control, and regulatory treatment. For effective governance, organizations must distinguish between traditional automation, predictive models, generative systems, and more advanced agentic architectures.
This precision will help organizations with accurate risk assessments, appropriate control design, and credible external disclosures. From a technological perspective, most enterprises use generative AI and chatbots as assistants for content generation, summaries, and analysis. These tools can greatly improve speed and scale, but they can also expand opportunities for unauthorized access and attacks.
As AI interacts with sensitive data, connects to internal systems, and responds to user prompts that are susceptible to prompt-based manipulation, they may create new pathways for exploitation and expand the organization’s attack surface and therefore vulnerability. For example, in 2025 security researchers discovered a vulnerability affecting Microsoft Copilot in which individuals could embed instructions in emails. The instructions were invisible to humans but readable by the AI assistant.
Agentic AI has been described by one author as “a new breed of AI systems that are semi- or fully autonomous and thus able to perceive, reason, and act on their own.” Agentic AI takes these risks a step further by orchestrating sequences of actions across an organization’s tools and systems, which can accelerate and increase the impact of misconfiguration, privilege issues, and misuse.
It’s important to note that legal and technical terminology are not always consistent. Many laws distinguish between AI systems broadly and “automated decision-making (ADM)” or “automated decision-making technology (ADMT)” when decisions materially affect individuals’ rights or opportunities. However, even within the legal realm, AI definitions differ. For example, the California Consumer Privacy Act defines ADMT as technology that processes personal information in a way that implicates human decision making. On the other hand, the Colorado AI Act focuses on “high-risk AI systems” used to make or significantly influence consequential decision making about individuals.
From the inside out: Using AI while preserving trust
Looking at AI from the inside out means examining the ways that AI is deployed within the organization and how that use affects employee relations, customer trust, and third‑party expectations. Many regulatory and policy frameworks take risk‑based approaches that focus on and calibrate controls around context, potential harm, and autonomy of the organization’s system. Common principles include transparency and disclosure, pre‑deployment and ongoing testing, accountability and documentation, and protections for autonomy and privacy.
Key internal legal and compliance concerns include the following:
- When using AI to monitor performance, don’t cross the line into intrusive employee surveillance.
- Do what is necessary to prevent leaks of confidential information, including personal data, intellectual property, and sensitive business information, each of which may have distinct regulatory and contractual protections.
- Preserve consumer and stakeholder trust by clearly signaling when chatbots or automated decision-making tools are in use and by providing meaningful avenues for communication, explanation, and dispute resolution.
- Manage supply‑chain risks by treating AI vendors and embedded AI services as critical third parties subject to structured risk assessments, contractual safeguards, and ongoing oversight.
Many significant operational risks due to human error can arise after AI is deployed. These can include misuse of AI, overreliance on AI, policy violations, and misalignment between intended and actual use. A mature AI governance program couples technical controls with training, access management, policy adherence monitoring, incident detection and response planning, and robust audit trail implementation.
For the full story, please click here.