## Introduction
The first deterministic AI engine for mission-critical logic.
CLM (Cognitive Logic Model) is a fundamental shift in how artificial intelligence processes reasoning. Unlike stochastic Large Language Models (LLMs), which predict the next likely token based on probability, CLM uses a hybrid neuro-symbolic architecture to verify logical consistency before generation.
## Use Cases
- Legal Automation: Contract review where "almost correct" is unacceptable.
- Financial Modeling: High-frequency trading algorithms dependent on precise news sentiment.
- Code Synthesis: Generating unit-tested, secure executable code.
## Installation
Integrate CLM into your stack in under 2 minutes.
### Node.js
The CLM Node.js library is written in TypeScript and includes type definitions.
```shell
# Install via npm
npm install @clm/sdk

# Install via yarn
yarn add @clm/sdk
```
### Python
Our Python client supports synchronous and asynchronous execution patterns.
```shell
# Install via pip
pip install clm-ai
```
## Authentication
Securely accessing the API.
The CLM API uses API keys for authentication. You can manage your API keys in the dashboard. Your API keys carry many privileges, so be sure to keep them secure.
### SDK Usage
```typescript
import { Client } from '@clm/sdk';

const client = new Client({
  apiKey: process.env.CLM_API_KEY
});
```
## Model Overview
Three distinct architectures for every use case.
CLM currently offers three models. Each is optimized for specific trade-offs between latency, reasoning depth, and cost.
| Model ID | Context | Cost (Input/Output) | Best For |
|---|---|---|---|
| clm-1.5 | 32k | $1.00 / $2.00 | General purpose, Standard tasks |
| clm-2-flash | 128k | $0.15 / $0.60 | High speed, Real-time chat, Summarization |
| clm-2-pro | 200k | $5.00 / $15.00 | Deep reasoning, Complex coding, Math |
### Detailed Breakdown
#### 1. CLM 1.5 (Standard)
The balanced legacy model. Reliable for most standard applications that do not require sub-20ms latency or advanced multi-step reasoning.
#### 2. CLM 2 Flash (Speed)
Built for extreme throughput. CLM 2 Flash is our most cost-effective model, designed for high-volume applications like customer support bots, real-time analytics, and simple data extraction.
#### 3. CLM 2 Pro (Reasoning)
Our flagship "Thinking" model. CLM 2 Pro employs an internal chain-of-thought verification process before outputting tokens. It excels at writing complex software, legal analysis, and scientific research.
## Context Window
Understanding memory limitations.
The context window represents the total amount of information (tokens) the model can retain in its "working memory" during a single request-response cycle.
- CLM 1.5: 32,768 tokens (~25k words)
- CLM 2 Flash: 128,000 tokens (~100k words)
- CLM 2 Pro: 200,000 tokens (~150k words)
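The word estimates above follow the common rule of thumb of roughly 0.75 words per token. As a quick pre-flight check, you can estimate whether a prompt fits a model's window before sending it. The sketch below is illustrative only: `approxTokenCount` and `fitsInContext` are not part of the `@clm/sdk`, and real token counts depend on the tokenizer.

```typescript
// Rough pre-flight check against the context windows listed above.
// Assumes ~0.75 words per token; real counts depend on the tokenizer.
const CONTEXT_WINDOWS: Record<string, number> = {
  "clm-1.5": 32_768,
  "clm-2-flash": 128_000,
  "clm-2-pro": 200_000,
};

function approxTokenCount(text: string): number {
  const words = text.trim().split(/\s+/).filter(Boolean).length;
  return Math.ceil(words / 0.75); // ~4 tokens for every 3 words
}

function fitsInContext(model: string, prompt: string, reservedForOutput = 1024): boolean {
  const window = CONTEXT_WINDOWS[model];
  if (window === undefined) throw new Error(`Unknown model: ${model}`);
  // Leave headroom for the response, since the request-response cycle shares one window.
  return approxTokenCount(prompt) + reservedForOutput <= window;
}
```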
## Chat Completions
`POST /v1/chat/completions`
Creates a model response for the given chat conversation.
### Parameters
- `model` (Required): E.g., `clm-2-pro` or `clm-2-flash`.
- `messages` (Required): A list of messages comprising the conversation.
- `temperature` (Optional): 0 to 2. Higher values mean more random outputs.
### Example: Deep Reasoning (CLM 2 Pro)
```typescript
const completion = await client.chat.completions.create({
  model: "clm-2-pro",
  messages: [
    { role: "system", content: "You are a senior engineer." },
    { role: "user", content: "Refactor this legacy codebase..." }
  ],
  temperature: 0.2 // Lower temperature for precision
});
```
### Example: High Speed (CLM 2 Flash)
```typescript
const response = await client.chat.completions.create({
  model: "clm-2-flash",
  messages: [{ role: "user", content: "Summarize this email." }],
  max_tokens: 500
});
```
## Embeddings
`POST /v1/embeddings`
Get a vector representation of a given input. CLM embeddings are compatible with all major vector databases (Pinecone, Milvus, Chroma).
### Example
```typescript
const embedding = await client.embeddings.create({
  model: "clm-embed-v1",
  input: "The quick brown fox jumped over the lazy dog"
});

console.log(embedding.data[0].embedding);
// Output: [0.00230, -0.0032, ...]
```
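Once you have vectors, a typical next step is semantic comparison. The cosine-similarity helper below is a generic sketch (not an SDK function) that works on any pair of equal-length vectors, such as two values of `embedding.data[0].embedding`:

```typescript
// Cosine similarity between two embedding vectors:
// 1 = same direction, 0 = orthogonal (unrelated), -1 = opposite.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length || a.length === 0) {
    throw new Error("Vectors must be non-empty and the same length");
  }
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

Vector databases such as Pinecone, Milvus, and Chroma implement the same metric natively, so a local helper like this is mainly useful for quick sanity checks.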
## Error Codes
Handling API exceptions.
| Code | Error Type | Description |
|---|---|---|
| 401 | AuthenticationError | Invalid API Key or expired token. |
| 429 | RateLimitError | You are sending requests too quickly. |
| 500 | ServerError | Issue on CLM servers. Retry the request. |
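Of these, 429 and 500 are transient while 401 is not, so a client typically retries the former with exponential backoff and fails fast on the latter. The wrapper below is a minimal sketch rather than part of the SDK, and it assumes thrown errors expose a numeric `status` field:

```typescript
// Minimal retry wrapper: retries only transient errors (429, 500).
// Assumes errors carry a numeric `status` field; adjust for your SDK's error type.
async function withRetries<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 500
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      const status = (err as { status?: number }).status;
      if (status !== 429 && status !== 500) throw err; // e.g. 401: do not retry
      // Exponential backoff: 500ms, 1s, 2s, ...
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastError;
}
```

You could then wrap any call, e.g. `withRetries(() => client.chat.completions.create(request))`.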
Documentation Version 2.5.0