AI access control #522

Open · wants to merge 3 commits into master
5 changes: 5 additions & 0 deletions docs/ai-access-control/framework/_category_.json
@@ -0,0 +1,5 @@
{
"label": "The Four-Perimeter Framework",
"collapsible": true,
"collapsed": true
}
17 changes: 17 additions & 0 deletions docs/ai-access-control/framework/data-protection.mdx
@@ -0,0 +1,17 @@
---
title: RAG Data Protection
sidebar_position: 2
---

# RAG Data Protection
RAG Data Protection ensures that AI agents only retrieve data the requesting user is authorized to access, filtering knowledge-base queries before execution and sanitizing results after retrieval. The two approaches below differ in where the permission check happens: per-resource filtering versus a single fetch of the user's permissions.

## Implementation - Filter Object
1. Setup policy
2. Fetch RAG resources
3. Filter RAG resources
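
A minimal sketch of this flow using Permit's Python SDK. The PDP address, the `document` resource type, and the `fetch_rag_resources` helper are hypothetical placeholders for your own setup:

```python
from permit import Permit

# Assumes a running PDP and an API key from your Permit environment.
permit = Permit(pdp="http://localhost:7766", token="<PERMIT_API_KEY>")

async def filter_rag_resources(user_key: str, query: str) -> list:
    # 1. Policy setup: roles and "read" permissions on "document"
    #    resources are configured in Permit beforehand.
    # 2. Fetch candidate RAG resources (hypothetical helper wrapping
    #    your vector store query).
    resources = await fetch_rag_resources(query)
    # 3. Keep only the documents this user is permitted to read.
    allowed = []
    for doc in resources:
        if await permit.check(user_key, "read", {"type": "document", "key": doc.id}):
            allowed.append(doc)
    return allowed
```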

## Implementation - Get User Permissions
1. Setup policy
2. Get User Permissions
3. Filter RAG resources
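
A sketch of the same outcome via a single permissions fetch, assuming the setup above; `allows_read` is a hypothetical helper that checks the returned permission set for a given document:

```python
async def filter_with_user_permissions(user_key: str, query: str) -> list:
    # 1. Policy setup is identical to the filter-object approach.
    # 2. Fetch the user's full permission set in one call instead of
    #    checking each resource individually.
    permissions = await permit.get_user_permissions(user=user_key)
    # 3. Filter the fetched RAG resources locally against that set.
    resources = await fetch_rag_resources(query)
    return [doc for doc in resources if allows_read(permissions, doc)]
```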
146 changes: 146 additions & 0 deletions docs/ai-access-control/framework/index.mdx
@@ -0,0 +1,146 @@
---
title: The Four-Perimeter Framework
---

# Permit's AI Access Control: The Four-Perimeter Framework

The **Four-Perimeter Framework** is a structured approach to securing AI interactions by enforcing fine-grained authorization (FGA) at multiple stages of AI processing. This framework ensures that AI applications maintain strict security, compliance, and control over data access, external interactions, and response generation.

The framework consists of four security perimeters:

1. **Prompt Filtering** – Validates and controls user input before it reaches AI models.
2. **RAG Data Protection** – Restricts access to AI knowledge bases and vector databases.
3. **Secure External Access** – Manages AI agent permissions when interacting with external tools and APIs.
4. **Response Enforcement** – Applies security policies to AI-generated outputs before delivery.

## Technical Details

### 1. Prompt Filtering

Prompt Filtering ensures that AI models only process inputs that users are explicitly authorized to submit. In addition to rejecting harmful queries, this perimeter enforces permissions on the types of prompts users can send based on their role, organization, and access level. By defining clear policies on allowable prompt content, organizations can ensure that AI models serve their intended function without exposing sensitive data or being misused for unauthorized tasks.

This perimeter prevents unauthorized inputs from influencing AI behavior by applying **role-based access control (RBAC)**, **attribute-based access control (ABAC)**, and **relationship-based access control (ReBAC)**. AI classification techniques further analyze prompts for harmful content before they are processed.

**Technical Capabilities:**

- Token validation for prompt structure enforcement.
- Pattern matching to detect unauthorized content types.
- AI-driven classification to categorize and filter harmful inputs.
- Dynamic validation of user prompts based on role and authorization policies.

#### Implementation Example

The following PydanticAI tool uses the framework's structured prompt capabilities to check whether a user has permission to request AI-generated financial advice.

```python
from pydantic_ai import RunContext

@financial_agent.tool
async def validate_financial_query(
    ctx: RunContext[PermitDeps],
    query: FinancialQuery,
) -> bool:
    # classify_prompt_for_advice is a helper (defined elsewhere) that
    # flags whether the prompt is asking for financial advice.
    is_seeking_advice = classify_prompt_for_advice(query.question)
    # Check whether this user may "receive" AI-generated financial advice.
    permitted = await ctx.deps.permit.check(
        {"key": ctx.deps.user_id},
        "receive",
        {
            "type": "financial_advice",
            "attributes": {"is_ai_generated": is_seeking_advice},
        },
    )
    return permitted
```

### 2. RAG Data Protection

Retrieval-Augmented Generation (RAG) systems query external knowledge bases to enhance AI-generated responses. **RAG Data Protection** ensures AI agents only access authorized data by filtering queries before execution and sanitizing results post-retrieval.

Pre-query filtering enforces **fine-grained access control** by restricting access based on user identity, organization, and context. Post-query filtering ensures that sensitive data does not appear in AI responses. This dynamic filtering ensures AI applications remain compliant without hindering legitimate queries.

**Technical Capabilities:**

- Pre-query access filtering for controlled AI knowledge base queries.
- Post-query filtering to remove sensitive or unauthorized data.
- Relationship-based access control (ReBAC) to define dynamic query restrictions.
- Seamless integration with vector search databases.

#### Implementation Example

This example uses Permit's LangChain retriever integration to append a permission-aware filter to a RAG query.

```python
import os

# Assumes the langchain-permit integration package is installed.
from langchain_permit.retrievers import PermitSelfQueryRetriever

retriever = PermitSelfQueryRetriever(
    api_key=os.getenv("PERMIT_API_KEY"),
    user=USER,                    # identity the query runs as
    resource_type=RESOURCE_TYPE,  # e.g. "document"
    action=ACTION,                # e.g. "read"
    vectorstore=vectorstore,      # any LangChain-compatible vector store
    enable_limit=False,
)
retrieved_docs = retriever.get_relevant_documents(query)
```
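
The retriever fetches the user's permitted resources from Permit and applies them as a filter on the vector store query, so unauthorized documents are excluded before the similarity search runs rather than scrubbed from the results afterward.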

### 3. Secure External Access

AI agents interact with APIs, databases, and third-party services to execute automated workflows. **Secure External Access** ensures that AI-driven actions remain controlled, auditable, and traceable.

Developers can regulate AI operations by **assigning machine identities to AI agents**, thus enforcing strict permissions. AI-driven external access can be further controlled through **MCP (Model Context Protocol)**, where AI agents authenticate before executing actions. "Human-in-the-loop" workflows can also be applied for sensitive operations, requiring explicit approval.

**Technical Capabilities:**

- Machine identity enforcement for AI agents.
- Fine-grained authorization for AI-driven operations.
- AI-to-AI interaction control via cascading identity policies.
- Access request workflows with human intervention for sensitive actions.

#### Implementation Example

In the following example, an MCP tool lets the AI agent file an access request that a human must approve.

```python
import httpx
from slugify import slugify  # python-slugify; normalizes the username into a user key

@mcp.tool()
async def request_access(username: str, resource: str, resource_name: str) -> str:
    # Log in as the requesting user via Permit Elements to obtain a scoped token.
    login = await permit.elements.login_as(
        {"userId": slugify(username), "tenant": "default"}
    )
    payload = {
        "access_request_details": {
            "tenant": "default",
            "resource": resource,
            "resource_instance": resource_name,
            "role": "viewer",
        },
        "reason": f"User {username} requests role 'viewer' for {resource_name}",
    }
    async with httpx.AsyncClient() as client:
        # Access request endpoint (URL truncated in the original).
        await client.post(
            "https://api.permit.io/v2/facts/...",
            json=payload,
            headers={"Authorization": f"Bearer {login.token}"},
        )
    return "Your request has been sent."
```

### 4. Response Enforcement

AI-generated responses must comply with security and privacy regulations. **Response Enforcement** ensures that AI-generated outputs align with access control policies, filtering sensitive or unauthorized content before it reaches the user.

By leveraging **role-based response filtering**, developers can define what different users can see. AI-generated content is dynamically modified to redact confidential data or notify users about access restrictions. This approach ensures compliance while preserving AI functionality.

**Technical Capabilities:**

- AI response filtering based on user roles.
- Pre-delivery AI output sanitization and data redaction.
- Context-aware response customization based on access control policies.
- Seamless integration with AI decision-making workflows.

#### Implementation Example
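
A minimal sketch of a LangFlow filter component, assuming a `permit_langflow` package that exposes `PermitFilterNode` (as in the draft below) and an initialized `permit` client in scope: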

```python
from permit_langflow import PermitFilterNode

class SecureResponse(PermitFilterNode):
    def process(self, response: str, user_id: str) -> str:
        # `permit` is assumed to be an initialized Permit client available
        # to the node; redact the response unless the user may view AI output.
        if not permit.check(user_id, "view", "ai_response"):
            return "[Restricted Content]"
        return response
```

## AI Access Control Integrations

Permit.io provides ready-to-use integrations with popular AI development frameworks, making it easy to enforce the **Four-Perimeter Framework** within AI applications:

- **LangChain** – Enables secure RAG queries with identity-aware authentication and access control.
- **LangFlow** – Implements structured permission checks within AI workflows, ensuring controlled AI interactions.
- **PydanticAI** – Validates and filters user prompts before AI processing, preventing unauthorized queries.
- **MCP** – Defines AI agent security boundaries, restricting external interactions to pre-approved actions.