Review Phase · API Onboarding

API Setup for cite.review

Only one key is required to make the verifier work: CourtListener. cite.review first checks citations against authoritative legal sources without using AI. Then, if you choose to add one AI provider, memo-support review compares each claim in the memo with the actual opinion text and helps flag fabricated citations, misstatements, and unsupported claims.

Browser-stored keys · Pick one AI provider · Same product family as PermaDrop

CourtListener (required)

  1. Sign in at courtlistener.com.
  2. Open Settings - API and click Generate Token.
  3. Paste the token into the cite.review setup gate or into Settings.

Used for case verification and opinion retrieval. The free tier supports high hourly volume for normal drafting workflows.
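The CourtListener step above can be sketched as code. This is a hedged illustration, not cite.review's actual implementation: the token-auth header format follows CourtListener's documented REST API, but the endpoint path and request shape should be checked against the current API docs, and the function names are hypothetical.

```javascript
// Sketch: verify citations against CourtListener's REST API.
// Endpoint path and payload shape are assumptions based on the public
// citation-lookup API; confirm against current docs before relying on it.

const BASE = "https://www.courtlistener.com/api/rest/v3";

// CourtListener token auth uses an "Authorization: Token <key>" header.
function authHeaders(token) {
  return {
    Authorization: `Token ${token}`,
    "Content-Type": "application/json",
  };
}

// POST a block of memo text; the API returns the citations it finds,
// resolved to opinions where possible.
async function lookupCitations(token, text) {
  const res = await fetch(`${BASE}/citation-lookup/`, {
    method: "POST",
    headers: authHeaders(token),
    body: JSON.stringify({ text }),
  });
  if (!res.ok) throw new Error(`CourtListener error: ${res.status}`);
  return res.json();
}
```

A single request like this is why one CourtListener token is the only hard requirement: verification never needs an AI key.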

Where Keys Are Stored

  • Keys are stored in your browser's localStorage.
  • No key is saved to a cite.review account backend.
  • You can clear all keys at any time in your browser's site-data settings.

If you share a machine, use a dedicated browser profile for legal work.
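In code terms, browser-side key storage might look like the sketch below. The storage key names here are hypothetical examples, not cite.review's actual identifiers; the point is only that everything lives in localStorage and can be wiped locally.

```javascript
// Sketch: browser-stored key handling. Key names are hypothetical.
const KEY_NAMES = ["courtlistener_token", "ai_provider_key"];

// `storage` is any localStorage-like object (window.localStorage in a browser).
function saveKey(storage, name, value) {
  storage.setItem(name, value);
}

// Clearing site data removes everything; this is the in-app equivalent.
function clearAllKeys(storage) {
  for (const name of KEY_NAMES) storage.removeItem(name);
}
```

Because the keys never leave the browser, clearing site data (or using a separate profile) is a complete reset.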

AI is optional. To keep setup simple, choose only one provider. Most users should keep the recommended default model.

Anthropic Claude (optional)
  1. Go to platform.claude.com/settings/keys.
  2. Create a new API key.
  3. Copy the key and paste it in Settings - AI Provider.
  4. Recommended default: claude-haiku-4-5-20251001.

Recommended for most users: fast, strong, and simple.

OpenAI (optional)
  1. Go to platform.openai.com/api-keys.
  2. Click Create new secret key.
  3. Copy the key and paste it in Settings - AI Provider.
  4. Recommended default: gpt-4.1-mini.

A simple choice for most users. Switch to a stronger model only if you want to trade speed and cost for more headroom.

Google Gemini (optional)
  1. Go to aistudio.google.com/app/apikey.
  2. Click Get API key and create a key.
  3. Copy the key and paste it in Settings - AI Provider.
  4. Recommended default: gemini-2.5-flash.

Gemini 2.0 Flash is deprecated. For this tool, keep 2.5 Flash unless you specifically want the stronger 2.5 Pro model.

Kimi / Moonshot (optional)
  1. Go to platform.kimi.com/console/api-keys.
  2. Create an API key.
  3. Copy the key and paste it in Settings - AI Provider.
  4. Recommended default: kimi-k2.6.

Kimi's current platform highlights the K2 family. Keep K2.6 unless you specifically prefer K2.5.
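Whichever cloud provider you pick, the memo-support exchange can be pictured like this. This is a hedged sketch using the OpenAI-compatible chat format: the prompt wording, function names, and endpoint usage are illustrative assumptions, not cite.review's actual prompts, and Anthropic and Gemini also expose their own native APIs.

```javascript
// Sketch: what a memo-support check might send to an OpenAI-compatible
// chat endpoint. Prompt text and shapes are illustrative assumptions.

function buildSupportRequest(model, claim, opinionExcerpt) {
  return {
    model,
    messages: [
      {
        role: "system",
        content:
          "You review legal memos. State whether the opinion text supports the claim.",
      },
      {
        role: "user",
        content: `Claim: ${claim}\n\nOpinion text: ${opinionExcerpt}`,
      },
    ],
  };
}

async function checkSupport(baseUrl, apiKey, request) {
  const res = await fetch(`${baseUrl}/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(request),
  });
  return res.json();
}
```

Each such call is billed by the provider, which is why the AI key is optional and separate from citation verification.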

Local (Ollama / LM Studio) (optional)
⚠️ Model quality matters enormously for legal work.
Small models that fit on a standard MacBook Air (≤16 GB RAM, 3B–7B parameters) perform very poorly at legal citation analysis — expect frequent hallucinations and missed errors. For reliable results you need ≥32B parameters, which require high-end workstation hardware. If you don't have that hardware, use a cloud provider instead.

Option A — LM Studio (recommended for Mac)

  1. Download LM Studio and open it.
  2. Click the search icon and download your model:
    8 GB RAM: qwen2.5-3b-instruct-mlx or llama-3.2-3b-instruct-mlx (~2 GB)
    16 GB RAM: gemma-4-e4b-it-mlx or qwen3-14b-mlx
    Download the MLX version (Apple Silicon only — 2–3× faster than Ollama)
  3. In the left sidebar, click Developer → toggle Enable local server on.
  4. Note the server address (usually http://127.0.0.1:1234/v1).
  5. In cite.review Settings: set Base URL to http://127.0.0.1:1234/v1, leave API Key blank.
  6. Set Model name to the exact identifier shown in LM Studio's server tab (e.g. qwen2.5-3b-instruct-mlx).

Option B — Ollama (Windows / Linux / Mac)

  1. Download Ollama and install it.
  2. Open Terminal and pull your model:
    8 GB RAM: ollama pull gemma4:e4b
    16 GB RAM: ollama pull qwen3:14b
  3. Ollama starts automatically — no API key needed.
  4. In cite.review Settings: set Base URL to http://localhost:11434/v1, leave API Key blank.
  5. Set Model name to the name you pulled (e.g. qwen3:14b).

Even with a large model, local inference is slower than cloud providers and results may vary. For production legal work, a cloud provider is more reliable.
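The two local options above differ only in base URL and model name, since both assume an OpenAI-compatible /v1 server with no API key. A minimal sketch of that configuration, using the example model names from the steps above (the function and preset names are hypothetical):

```javascript
// Sketch: local-server settings for cite.review. Both LM Studio and
// Ollama are assumed to expose an OpenAI-compatible /v1 API; no key needed.
// Model names are the examples from the setup steps, not requirements.

function localConfig(server) {
  const presets = {
    lmstudio: {
      baseUrl: "http://127.0.0.1:1234/v1",
      model: "qwen2.5-3b-instruct-mlx",
    },
    ollama: {
      baseUrl: "http://localhost:11434/v1",
      model: "qwen3:14b",
    },
  };
  return { apiKey: "", ...presets[server] };
}
```

Swap in whatever model identifier your local server actually reports; a mismatch between this field and the server's model name is the most common local-setup failure.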

Recommended baseline

CourtListener key + one cloud AI key is the simplest setup for most users.

Cost note

AI memo-support analysis can incur provider charges. Citation verification via CourtListener remains separate.