Drsti.AI

Frequently Asked Questions

Everything you need to know about Drsti Studio — whether you're running AI Assists, building them, or deploying them across your organization.

General

Core questions about the Drsti Studio platform

What is Drsti Studio?

Drsti Studio is a governed, multi-tenant AI agent platform. It enables three audiences:

  • Consumers — run specialized AI Assists to solve problems
  • Creators — build, publish, and monetize AI Assists
  • Enterprises — deploy them across teams with policy controls, audit trails, and tenant-isolated subdomains

Think of it as an app store for AI agents, with enterprise-grade governance built in from day one.

How is Drsti Studio different from ChatGPT or Claude?

ChatGPT and Claude are general-purpose chat interfaces. Drsti Studio is an execution platform where AI agents take real actions:

  • Multi-step tool execution built in via MCP (Model Context Protocol)
  • Enterprise governance with policies and audit trails
  • Multi-tenant isolation with dedicated subdomains per organization
  • Creator monetization with 70/30 revenue split marketplace
  • Cost optimization using Plan-First architecture (3–5x cheaper)
  • Team billing with shared credit pools instead of per-user seats

How much does Drsti Studio cost?

For individuals:

  • Free — 5 runs/month, no credit card required
  • Starter — $9.99/mo, 50 runs
  • Pro — $29.99/mo, 500 runs
  • Business — $99.99/mo, unlimited runs

For enterprises: seat-based pricing starting at $9.99/seat/month with shared credit pools and volume discounts. Custom enterprise plans available.

What are AI Assists?

AI Assists are intelligent agents built on Drsti Studio. Unlike simple chatbots, Assists autonomously execute multi-step plans — calling external tools, querying data sources, and making decisions.

  • Creators build them
  • Consumers run them
  • Enterprises govern them

Can I earn money as a creator?

Yes. Creators build AI Assists, publish them to the marketplace, and earn 70% of revenue from every consumer run.

  • You keep your domain expertise
  • Drsti handles infrastructure, billing, and distribution
  • No setup fees — publish and start earning immediately

What is MCP?

MCP (Model Context Protocol) is an open standard that lets AI models call external tools safely. Drsti Studio uses MCP as the universal connector between AI agents and:

  • APIs and databases
  • SaaS tools
  • Enterprise systems

Creators register MCP tool servers, and agents call them at runtime.
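To make the runtime flow concrete, here is a minimal sketch of what an MCP-style tool invocation looks like on the wire. MCP messages follow the JSON-RPC 2.0 shape with a `tools/call` method; the tool name and arguments below are hypothetical examples, not tools that ship with Drsti Studio.

```python
import json

# An MCP-style "tools/call" request in JSON-RPC 2.0 form.
# The tool name and arguments are hypothetical illustrations.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_database",          # hypothetical registered tool
        "arguments": {"sql": "SELECT 1"},  # tool-specific input
    },
}

# The agent serializes the request and sends it to the registered
# MCP tool server, which replies with a JSON-RPC result message.
wire_message = json.dumps(request)
print(wire_message)
```

The key point is that every tool call is a structured, inspectable message, which is what makes policy checks and audit logging possible downstream.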

How is enterprise data protected?

Drsti Studio implements multiple layers of protection:

  • Encryption at rest and in transit
  • Tenant-isolated environments
  • Secure credential storage
  • Immutable audit logs
  • Policy-gated tool access

The platform never stores raw enterprise data — it accesses information on-demand through MCP tools and discards it after the session.

What do enterprises get?

Enterprises get:

  • Dedicated subdomain with your branding
  • Shared credit pools with per-seat billing
  • Workspace-based team organization
  • Admin-controlled tool policies
  • Approval gates for sensitive actions
  • Full audit trails

BYOK support for LLM providers and external credentials is coming soon.

Can we bring our own API keys (BYOK)?

BYOK is on our roadmap. When available, you will be able to bring your own API keys for:

  • OpenAI
  • Anthropic
  • Azure OpenAI
  • Google Gemini
  • External services

All credentials will be stored with enterprise-grade encryption and admin-proof access controls.

Which LLM providers are supported?

Drsti Studio supports multiple LLM providers:

  • AWS Bedrock
  • Anthropic
  • OpenAI
  • Azure OpenAI
  • Google Gemini

Administrators can configure model routing strategies to optimize cost and performance across different use cases.
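The routing idea above can be pictured as a simple lookup from task category to model. The category names and model identifiers below are made-up placeholders, not Drsti Studio's actual configuration schema.

```python
# An illustrative routing table of the kind an administrator might
# configure. Task categories and model IDs are assumptions.
ROUTING = {
    "planning": "anthropic:high-capability",   # hypothetical model IDs
    "execution": "openai:fast-small",
    "default": "bedrock:general",
}

def pick_model(task: str) -> str:
    """Return the configured model for a task, falling back to the default."""
    return ROUTING.get(task, ROUTING["default"])
```

A table like this lets an admin send expensive reasoning work to a stronger model while routing routine steps to a cheaper one.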

For Consumers

Getting started, data safety, and pricing for individual users

How do I get started?

  1. Sign up at studio.drsti.ai — free account, no credit card required
  2. Browse the Assist catalog for agents that match your needs
  3. Run an Assist by describing your intent in natural language
  4. Review results — see the agent's reasoning, tool calls, and final output

The Free tier includes 5 runs per month so you can try before you commit.

Do I need technical skills to use an Assist?

No. Consumers interact with AI Assists using plain natural language. Describe what you want to accomplish, and the Assist figures out the technical steps. You can see what's happening behind the scenes (tool calls, reasoning), but you don't need to configure anything.

What happens when I run an Assist?

  1. Planning — An AI model analyzes your intent and creates an execution plan
  2. Execution — The plan is carried out step by step, calling tools as needed
  3. Streaming — You see results in real-time as tools return data
  4. Output — The Assist presents a final, formatted answer

This Plan-First architecture is significantly cheaper than traditional agent loops — lower costs passed on to you.
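The four-step flow above can be sketched as a plan-then-execute loop. Everything here is an illustrative assumption — the plan format, tool names, and outputs are stand-ins, not Drsti Studio's actual internals.

```python
# A simplified sketch of the Plan-First run flow. Tool names and the
# planner's output format are hypothetical.

def make_plan(intent: str) -> list[dict]:
    """Stand-in for the planning model: turn an intent into ordered steps."""
    return [
        {"tool": "search", "input": intent},
        {"tool": "summarize", "input": "search results"},
    ]

TOOLS = {
    "search": lambda q: f"results for '{q}'",
    "summarize": lambda text: f"summary of {text}",
}

def run_assist(intent: str) -> str:
    plan = make_plan(intent)          # 1. Planning (runs once)
    output = ""
    for step in plan:                 # 2. Execution, step by step
        output = TOOLS[step["tool"]](step["input"])
        print(output)                 # 3. Streaming partial results
    return output                     # 4. Final formatted output

final = run_assist("find recent AI news")
```

Because the plan is produced once up front, the per-step work can run on a cheaper, faster model.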

Can I see what an Assist will access before running it?

Yes. Every Assist has a permissions disclosure that lists:

  • Which MCP tool servers it connects to
  • What types of data it can access
  • Whether it requires any sensitive permissions

You can review this before running. Enterprise admins can further restrict which tools are allowed via policies.

Is my data safe when I run an Assist?

Yes. Drsti Studio:

  • Encrypts all inputs and outputs at rest (AWS KMS) and in transit (TLS)
  • Does not store raw data beyond the session unless you choose to save results
  • Does not train AI models on your data
  • Isolates your runs from other users

See the Security & Compliance section for full details.

For Creators

Building Assists, monetization, and tool integration

How do I build and publish an Assist?

  1. Define the agent — Write system instructions describing the Assist's purpose and behavior
  2. Assign tools — Select which MCP tool servers the Assist can use (platform tools or your own)
  3. Configure settings — Set output format, timeout limits, and execution mode
  4. Test — Run your Assist in draft mode to validate behavior
  5. Publish — Create an immutable version and list it in the catalog

No infrastructure management needed — Drsti handles hosting, scaling, billing, and governance.

How does monetization work?

  • Revenue split — You keep 70%, Drsti keeps 30%
  • Usage-based — Earn on every consumer run, not per subscription
  • Transparent tracking — Real-time earnings dashboard with run counts and engagement
  • Payouts — Via Stripe Connect, direct to your bank account
  • No setup fees — Publish and start earning immediately

Can I connect my own tools?

Yes. Creators can register custom MCP tool servers that connect agents to any external service. Tools are stateless HTTP endpoints — deploy on any infrastructure:

  • AWS EC2 or Lambda
  • Your own servers
  • Any HTTP-accessible endpoint

The platform handles discovery, auth, and error handling.
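"Stateless" here means each request carries everything the tool needs and nothing is retained between calls. A minimal sketch of that contract, framework-agnostic and with a hypothetical payload shape:

```python
# A stateless tool endpoint sketch: pure request -> response mapping,
# no session or global state. The payload shape and tool logic are
# hypothetical examples.

def handle_tool_request(payload: dict) -> dict:
    """Handle one tool call; independent of any previous call."""
    if payload.get("tool") == "convert_temperature":
        celsius = payload["arguments"]["celsius"]
        return {"ok": True, "fahrenheit": celsius * 9 / 5 + 32}
    return {"ok": False, "error": f"unknown tool: {payload.get('tool')}"}

# Two independent calls; the second is unaffected by the first.
r1 = handle_tool_request({"tool": "convert_temperature", "arguments": {"celsius": 100}})
r2 = handle_tool_request({"tool": "missing"})
```

Statelessness is what lets you deploy the same handler on Lambda, EC2, or your own servers without coordination between instances.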

What is BYOK?

Bring Your Own Key (BYOK) is a planned feature that will let you use your own API credentials for LLM providers (OpenAI, Anthropic, Azure, or Gemini) and external services. When available, your keys will be stored with enterprise-grade encryption and admin-proof access controls.

Can I control who uses my Assist?

Yes. When publishing, choose a visibility level:

  • Private — Only you can use it
  • Team — Shared with your workspace members
  • Organization — Available across your enterprise
  • Public — Listed in the marketplace for all consumers

Enterprise admins can further restrict which Assists are available in their organization through policies.

For Enterprises

Multi-tenant deployment, governance, billing, and team management

What does an enterprise deployment include?

  1. Dedicated subdomain with your branding
  2. Organization admin portal — Manage members, roles, billing, and policies
  3. Workspaces — Organize teams and departments with scoped access
  4. Shared credit pool — All members draw from one budget with per-user cost tracking
  5. Governance controls — Tool policies, approval gates, and audit trails
  6. BYOK support — Bring your own LLM providers and external credentials (coming soon)

Setup is straightforward: create a tenant workspace, configure policies, and deploy Assists.

How does multi-tenant isolation work?

Every enterprise gets a dedicated tenant context with complete isolation:

  • Subdomain routing — Your dedicated subdomain resolves to your tenant space
  • Data partitioning — All queries are scoped to your organization
  • No cross-tenant leakage — Agents, runs, tools, and credentials are invisible to other tenants
  • Separate billing — Your credit pool is independent from other organizations
  • Admin boundary — Your admins manage only your organization
What governance controls are available?

  • Tool policies — Allowlist or denylist which MCP tools agents can use
  • Data class restrictions — Block tools that access PII, financial, or health data
  • Approval gates — Require human approval for sensitive actions
  • Immutable audit trail — Every tool call logged with user, parameters, outcome, and timestamp
  • Assist versioning — Pin agents to specific versions for reproducibility and compliance
  • Role-based access — Admin, Creator, and Consumer roles with scoped permissions
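The first three controls above (tool policies, data class restrictions, approval gates) can be pictured as a single policy check per tool call. The field names below are assumptions for illustration, not Drsti Studio's actual policy schema.

```python
# An illustrative policy evaluation combining allowlists, data-class
# restrictions, and approval gates. Field names are hypothetical.

POLICY = {
    "allowed_tools": {"search", "crm_lookup"},
    "blocked_data_classes": {"pii", "health"},
    "require_approval": {"crm_lookup"},
}

def evaluate(tool: str, data_classes: set[str]) -> str:
    """Return a decision for one tool call against the policy."""
    if tool not in POLICY["allowed_tools"]:
        return "deny"               # not on the allowlist
    if data_classes & POLICY["blocked_data_classes"]:
        return "deny"               # touches a restricted data class
    if tool in POLICY["require_approval"]:
        return "pending_approval"   # approval gate for sensitive actions
    return "allow"
```

Each decision, including denials, would then be written to the audit trail along with the rule that triggered it.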

Which LLM providers can we bring?

Enterprise BYOK is on the roadmap. When available, it will support:

  • AWS Bedrock
  • OpenAI
  • Azure OpenAI
  • Google Gemini
  • Anthropic Direct

Administrators can configure model routing strategies to optimize cost and quality per use case.

How does enterprise billing work?

Enterprise billing uses a shared credit pool model:

  • Per-seat pricing — Base cost per team member
  • Shared pool — All members draw from one org-wide budget
  • Cost attribution — Reports show spending by user, workspace, and Assist
  • Budget controls — Set spending limits per department
  • Consolidated invoicing — Single org invoice via Stripe (NET-30 for enterprise)

Contact us for detailed pricing.

Can our team build internal Assists with our own data?

Absolutely. Enterprise teams can:

  • Build private Assists visible only within the organization
  • Connect to internal data sources via MCP tools (databases, APIs, SaaS tools)
  • Set workspace-scoped access so only specific departments see certain Assists
  • Keep data internal — enterprise Assists run in your tenant with your credentials
How can teams access Drsti Studio?

  • Web dashboard — Available now at your enterprise subdomain
  • API access — Headless integration for custom applications (available now)
  • Slack integration — Run Assists via Slack commands (coming soon)
  • Microsoft Teams — Run Assists from Teams conversations (coming soon)

Security & Compliance

Encryption, audit trails, certifications, and data handling

What security and compliance measures are in place?

Drsti Studio provides:

  • Encryption at rest for all stored data
  • Encryption in transit (TLS 1.2+) for all API communication
  • Application-level encryption for sensitive fields with tenant-scoped context
  • Configurable data retention — 7–365 days, or custom policy
  • Admin-controlled purge — delete historical data on demand
  • GDPR support
  • Append-only audit trails
  • SOC 2 Type II — actively pursuing
  • BYOK for LLM provider credentials — on roadmap

Enterprise customers can request compliance documentation and evidence packages by contacting our team.

How are credentials and secrets stored?

All credentials (MCP server secrets and OAuth tokens) are stored with enterprise-grade encryption, separate from application data:

  • Admin-proof — Platform administrators cannot read credential values
  • Key rotation supported without application changes
  • Audit-logged — All credential access events are recorded
  • Never stored in the application database
  • Never logged in plaintext
  • Never included in API responses

Do you train AI models on my data?

No. Drsti Studio does not use your data to train any AI models:

  • Your data is processed in real-time and subject to your configured retention policy
  • Named LLM providers (Bedrock, OpenAI, etc.) are used transparently
  • No shadow routing to unknown models

What do audit logs capture?

Every action generates an audit record:

  • Tool invocations — Which user ran which tool, with what parameters, and the outcome
  • Policy decisions — Every allow/deny decision with the rule that triggered it
  • Approval gates — Who requested, who approved/denied, with timestamps
  • Admin actions — Member changes, policy updates, configuration changes

Logs are append-only (cannot be modified), tenant-scoped, and queryable via the admin dashboard. Retention is configurable for enterprise customers.
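The record shape described above can be sketched as a small append-only log. The field names are illustrative assumptions; the real schema is not public.

```python
import datetime

# An append-only audit log sketch: records are only ever appended,
# never modified or deleted. Field names are hypothetical.
audit_log: list[dict] = []

def record(user: str, tool: str, params: dict, outcome: str) -> None:
    """Append one audit record with a UTC timestamp."""
    audit_log.append({
        "user": user,
        "tool": tool,
        "params": params,
        "outcome": outcome,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })

record("alice@example.com", "crm_lookup", {"account": "ACME"}, "allow")
```

In production this would be backed by tamper-evident storage rather than an in-memory list, but the contract is the same: write once, read many.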

Pricing & Billing

Free tier, credit system, and individual/enterprise pricing plans

Is there a free tier?

Yes. The Free tier includes:

  • 5 AI Assist runs per month
  • Access to all public Assists in the catalog
  • No credit card required to sign up
  • Full streaming output with agent reasoning visible

Upgrade anytime to Starter, Pro, or Business for more runs and features.

How do credits work?

  • 1 credit ≈ 1 Assist run (varies by agent complexity and tool usage)
  • Credits are consumed on execution, not per message or per query
  • Your balance is visible in real-time during and after each run
  • Unused credits remain available for the rest of the billing period; subscription credits reset monthly
What are the individual plans?

  • Free — $0 · 5 runs/month · Try out Assists
  • Starter — $9.99/month · 50 runs · Regular individual use
  • Pro — $29.99/month · 500 runs · Power users
  • Business — $99.99/month · Unlimited runs · Teams and heavy use

All paid tiers include unlimited AI model access (no per-token surcharges).

How does enterprise billing differ from individual plans?

Enterprise billing adds:

  • Shared credit pools — One budget for the entire organization
  • Per-seat pricing — Base cost per team member
  • Cost attribution — Track spending by user, workspace, and Assist
  • Budget controls — Set spending limits per department
  • Consolidated invoicing — Single org invoice via Stripe (NET-30 for enterprise)
How can my company get started?

  1. Start with Free tier — Individual account, no commitment
  2. Upgrade to Pro — Test with your team informally
  3. Request an enterprise trial — Dedicated subdomain with full governance features
  4. Scale — Move to seat-based billing when ready

Contact hello@drsti.ai for enterprise trial requests.

Technical

Architecture, API access, regions, and technical implementation details

What is Drsti Studio built on?

  • Fully serverless on AWS — auto-scaling by default, no servers to manage
  • Multi-provider AI — AWS Bedrock, OpenAI, Azure, Gemini, Anthropic
  • MCP-powered agents for safe, auditable tool execution
  • Global CDN for low-latency delivery
  • OAuth 2.0 + JWT authentication
  • Infrastructure as code for reproducible deployments
How does the Plan-First architecture work?

  1. A reasoning model creates a detailed execution plan — runs once per request
  2. A fast model executes each step — cheaper per invocation
  3. Result: significant cost reduction vs. using a single expensive model for everything

This is transparent to users — you get smart planning with fast execution at a fraction of the cost.
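A back-of-envelope calculation shows where the saving comes from. The per-call costs below are made-up placeholders, not real provider pricing; only the structure of the comparison matters.

```python
# Illustrative cost comparison: plan once on an expensive model, then
# execute each step on a cheap one, vs. driving every step with the
# expensive model. All prices are hypothetical placeholders.

expensive_call = 0.05   # hypothetical cost of one reasoning-model call
cheap_call = 0.005      # hypothetical cost of one fast-model call
steps = 8               # tool-calling steps in a typical run

traditional = steps * expensive_call              # expensive model every step
plan_first = expensive_call + steps * cheap_call  # plan once, execute cheaply

ratio = traditional / plan_first
print(f"Plan-First is about {ratio:.1f}x cheaper")
```

With these placeholder numbers the ratio lands around 4.4x, consistent with the 3-5x range cited elsewhere in this FAQ; real savings depend on actual model pricing and run length.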

Is there an API?

Yes. Drsti Studio exposes a RESTful API:

  • Run Assists — Trigger agent runs from your applications
  • Manage tools — Register and configure MCP servers
  • Query results — Retrieve run outputs and status
  • Admin operations — Manage members, billing, and policies (enterprise)

Authentication uses JWT tokens. Full API documentation is available to registered users.
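As a hedged illustration, an authenticated call would be shaped roughly like this. The endpoint path, payload fields, and token value are hypothetical; consult the API documentation for the real contract.

```python
import json

# Sketch of an authenticated API request. The JWT value, endpoint
# path, and payload fields are hypothetical placeholders.

token = "your-jwt-here"  # issued after authentication
headers = {
    "Authorization": f"Bearer {token}",  # standard Bearer token scheme
    "Content-Type": "application/json",
}
payload = {
    "assist_id": "example-assist",       # hypothetical Assist identifier
    "input": "Summarize last week's support tickets",
}
body = json.dumps(payload)

# The request would then be sent to a run endpoint, e.g. with the
# requests library (URL path is an assumption):
# requests.post("https://studio.drsti.ai/api/runs", headers=headers, data=body)
```

The same headers would apply to the tool-management, results, and admin endpoints listed above.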

Where is data processed and stored?

Drsti Studio currently operates in a US-based AWS region. Multi-region support is on the roadmap. All data is processed and stored within the deployed region.

Do you support SSO/SAML?

SSO/SAML is on the roadmap. Currently supported:

  • Email + password
  • Google OAuth
  • Microsoft OAuth
  • Facebook OAuth

Enterprise customers requiring SSO — contact hello@drsti.ai.

Still have questions?

Contact Team Drsti or request a demo to learn more.