Excessive agency in LLMs occurs when AI systems perform actions beyond what they were intended to do. It typically arises when AI systems are granted excessive functionality, permissions, or autonomy without adequate guardrails. The result can be unwanted business, legal and financial consequences, as well as security risks.
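One common guardrail is an explicit allow-list that constrains which actions an agent may execute, regardless of what capabilities happen to be wired up. The sketch below is a minimal, hypothetical illustration of that idea; the action names and `execute_action` helper are invented for this example and do not correspond to any particular framework.

```python
# Hypothetical sketch of an allow-list guardrail for an LLM agent.
# All names here are illustrative assumptions, not a real framework's API.

ALLOWED_ACTIONS = {"read_document"}  # the agent's intended scope

def execute_action(action: str, handlers: dict):
    """Run an agent-requested action only if it is explicitly allowed."""
    if action not in ALLOWED_ACTIONS:
        # Block over-reach even if a handler for the action exists.
        raise PermissionError(f"Action '{action}' exceeds the agent's granted scope")
    return handlers[action]()

handlers = {
    "read_document": lambda: "document contents",
    "delete_database": lambda: "database deleted",  # over-granted capability
}

print(execute_action("read_document", handlers))   # permitted
try:
    execute_action("delete_database", handlers)    # blocked by the guardrail
except PermissionError as err:
    print(err)
```

The key point is that permissions are enforced outside the model: even though a `delete_database` handler is registered, the guardrail refuses to invoke it because it falls outside the intended scope.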
Examples of excessive agency include:
Excessive agency in LLMs arises from underlying issues in how these models are designed, trained and integrated into systems.
Core causes of excessive agency are:
Excessive agency can have significant implications for organizations and users alike. Its impacts can include:
Addressing excessive agency risks requires a multi-layered approach that spans all stages of the LLM lifecycle. This includes: