Free Tools

Free resources from the KonaSense team

Hands-on learning and vendor research to help security, AI, and governance teams understand real AI risk. No sign-up. Updated as the landscape changes.

Interactive Learning
Free

Understand prompt injection in under 20 minutes

A free, hands-on course that shows how attackers hijack LLMs with crafted instructions hidden in files, webpages, and tool outputs. Walk through real cases, practice in a live simulator, and leave with a defense playbook your team can actually use.

Five progressive modules

From the core vulnerability to real attack patterns across chat, RAG, and agent workflows.

Live attack simulator

Try direct and indirect prompt injection in a safe sandbox, then see exactly why it worked.

Defense playbook

Concrete controls you can apply today: content isolation, policy guardrails, and output filtering.

5

Modules

12+

Real attack cases

Free

No sign-up required

Vendor Data Benchmark
Free

See which LLM providers train on your data by default

A continuously updated matrix that compares consumer and enterprise plans across frontier AI providers. Check training defaults, retention windows, and opt-out requirements before your employees paste sensitive data into the wrong tier.

Training risk at a glance

Every entry is scored: trains by default, opt-in only, or safe by default across plans and modes.

Filter by provider, tier, and mode

Slice by ChatGPT, Claude, Gemini, Copilot, free vs enterprise, and chat vs API to find your exact exposure.

Backed by public research

Cross-referenced with Stanford HAI's privacy policy analysis so you can cite the evidence in reviews.

72

Entries tracked

39

Providers

32%

Train by default

Want the full platform behind these tools?

KonaSense brings browser observability, agent governance, and audit-ready evidence into one control plane. Talk to our team to see the full picture.