@@ -81,7 +83,7 @@ There are mainly 2 ways of installing Perplexica - With Docker, Without Docker.
Perplexica can be easily run using Docker. Simply run the following command:
```bash
-docker run -d -p 3000:3000 -v perplexica-data:/home/perplexica/data -v perplexica-uploads:/home/perplexica/uploads --name perplexica itzcrazykns1337/perplexica:latest
+docker run -d -p 3000:3000 -v perplexica-data:/home/perplexica/data --name perplexica itzcrazykns1337/perplexica:latest
```
This will pull and start the Perplexica container with the bundled SearxNG search engine. Once running, open your browser and navigate to http://localhost:3000. You can then configure your settings (API keys, models, etc.) directly in the setup screen.
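If the page doesn't load, the container logs are the quickest place to check (standard Docker CLI, nothing Perplexica-specific):

```bash
docker logs -f perplexica
```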
@@ -93,7 +95,7 @@ This will pull and start the Perplexica container with the bundled SearxNG searc
If you already have SearxNG running, you can use the slim version of Perplexica:
```bash
-docker run -d -p 3000:3000 -e SEARXNG_API_URL=http://your-searxng-url:8080 -v perplexica-data:/home/perplexica/data -v perplexica-uploads:/home/perplexica/uploads --name perplexica itzcrazykns1337/perplexica:slim-latest
+docker run -d -p 3000:3000 -e SEARXNG_API_URL=http://your-searxng-url:8080 -v perplexica-data:/home/perplexica/data --name perplexica itzcrazykns1337/perplexica:slim-latest
```
**Important**: Make sure your SearxNG instance has:
@@ -120,7 +122,7 @@ If you prefer to build from source or need more control:
```bash
docker build -t perplexica .
- docker run -d -p 3000:3000 -v perplexica-data:/home/perplexica/data -v perplexica-uploads:/home/perplexica/uploads --name perplexica perplexica
+ docker run -d -p 3000:3000 -v perplexica-data:/home/perplexica/data --name perplexica perplexica
```
5. Access Perplexica at http://localhost:3000 and configure your settings in the setup screen.
@@ -237,13 +239,8 @@ Perplexica runs on Next.js and handles all API requests. It works right away on
## Upcoming Features
-- [x] Add settings page
-- [x] Adding support for local LLMs
-- [x] History Saving features
-- [x] Introducing various Focus Modes
-- [x] Adding API support
-- [x] Adding Discover
-- [ ] Finalizing Copilot Mode
+- [ ] Adding more widgets, integrations, and search sources
+- [ ] Adding authentication
## Support Us
diff --git a/docker-compose.yaml b/docker-compose.yaml
index 50b6785..e2c245d 100644
--- a/docker-compose.yaml
+++ b/docker-compose.yaml
@@ -1,6 +1,8 @@
services:
perplexica:
image: itzcrazykns1337/perplexica:latest
+ build:
+ context: .
ports:
- '3000:3000'
volumes:
diff --git a/docs/API/SEARCH.md b/docs/API/SEARCH.md
index 04f11ef..0c35a81 100644
--- a/docs/API/SEARCH.md
+++ b/docs/API/SEARCH.md
@@ -57,7 +57,7 @@ Use the `id` field as the `providerId` and the `key` field from the models array
### Request
-The API accepts a JSON object in the request body, where you define the focus mode, chat models, embedding models, and your query.
+The API accepts a JSON object in the request body, where you define the enabled search `sources`, chat models, embedding models, and your query.
#### Request Body Structure
@@ -72,7 +72,7 @@ The API accepts a JSON object in the request body, where you define the focus mo
"key": "text-embedding-3-large"
},
"optimizationMode": "speed",
- "focusMode": "webSearch",
+ "sources": ["web"],
"query": "What is Perplexica",
"history": [
["human", "Hi, how are you?"],
@@ -87,24 +87,25 @@ The API accepts a JSON object in the request body, where you define the focus mo
### Request Parameters
-- **`chatModel`** (object, optional): Defines the chat model to be used for the query. To get available providers and models, send a GET request to `http://localhost:3000/api/providers`.
+- **`chatModel`** (object, required): Defines the chat model to be used for the query. To get available providers and models, send a GET request to `http://localhost:3000/api/providers`.
- `providerId` (string): The UUID of the provider. You can get this from the `/api/providers` endpoint response.
- `key` (string): The model key/identifier (e.g., `gpt-4o-mini`, `llama3.1:latest`). Use the `key` value from the provider's `chatModels` array, not the display name.
-- **`embeddingModel`** (object, optional): Defines the embedding model for similarity-based searching. To get available providers and models, send a GET request to `http://localhost:3000/api/providers`.
+- **`embeddingModel`** (object, required): Defines the embedding model for similarity-based searching. To get available providers and models, send a GET request to `http://localhost:3000/api/providers`.
- `providerId` (string): The UUID of the embedding provider. You can get this from the `/api/providers` endpoint response.
- `key` (string): The embedding model key (e.g., `text-embedding-3-large`, `nomic-embed-text`). Use the `key` value from the provider's `embeddingModels` array, not the display name.
-- **`focusMode`** (string, required): Specifies which focus mode to use. Available modes:
+- **`sources`** (array, required): Which search sources to enable. Available values:
- - `webSearch`, `academicSearch`, `writingAssistant`, `wolframAlphaSearch`, `youtubeSearch`, `redditSearch`.
+ - `web`, `academic`, `discussions`.
- **`optimizationMode`** (string, optional): Specifies the optimization mode to control the balance between performance and quality. Available modes:
- `speed`: Prioritize speed and return the fastest answer.
- `balanced`: Provide a balanced answer with good speed and reasonable quality.
+ - `quality`: Prioritize answer quality (may be slower).
- **`query`** (string, required): The search query or question.
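For example, a minimal non-streaming request looks like this (the provider UUIDs are placeholders; fetch real ones from `/api/providers`):

```bash
curl http://localhost:3000/api/search \
  -H 'Content-Type: application/json' \
  -d '{
    "chatModel": { "providerId": "<provider-uuid>", "key": "gpt-4o-mini" },
    "embeddingModel": { "providerId": "<provider-uuid>", "key": "text-embedding-3-large" },
    "optimizationMode": "speed",
    "sources": ["web"],
    "query": "What is Perplexica"
  }'
```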
@@ -132,14 +133,14 @@ The response from the API includes both the final message and the sources used t
"message": "Perplexica is an innovative, open-source AI-powered search engine designed to enhance the way users search for information online. Here are some key features and characteristics of Perplexica:\n\n- **AI-Powered Technology**: It utilizes advanced machine learning algorithms to not only retrieve information but also to understand the context and intent behind user queries, providing more relevant results [1][5].\n\n- **Open-Source**: Being open-source, Perplexica offers flexibility and transparency, allowing users to explore its functionalities without the constraints of proprietary software [3][10].",
"sources": [
{
- "pageContent": "Perplexica is an innovative, open-source AI-powered search engine designed to enhance the way users search for information online.",
+ "content": "Perplexica is an innovative, open-source AI-powered search engine designed to enhance the way users search for information online.",
"metadata": {
"title": "What is Perplexica, and how does it function as an AI-powered search ...",
"url": "https://askai.glarity.app/search/What-is-Perplexica--and-how-does-it-function-as-an-AI-powered-search-engine"
}
},
{
- "pageContent": "Perplexica is an open-source AI-powered search tool that dives deep into the internet to find precise answers.",
+ "content": "Perplexica is an open-source AI-powered search tool that dives deep into the internet to find precise answers.",
"metadata": {
"title": "Sahar Mor's Post",
"url": "https://www.linkedin.com/posts/sahar-mor_a-new-open-source-project-called-perplexica-activity-7204489745668694016-ncja"
@@ -158,7 +159,7 @@ Example of streamed response objects:
```
{"type":"init","data":"Stream connected"}
-{"type":"sources","data":[{"pageContent":"...","metadata":{"title":"...","url":"..."}},...]}
+{"type":"sources","data":[{"content":"...","metadata":{"title":"...","url":"..."}},...]}
{"type":"response","data":"Perplexica is an "}
{"type":"response","data":"innovative, open-source "}
{"type":"response","data":"AI-powered search engine..."}
@@ -174,9 +175,9 @@ Clients should process each line as a separate JSON object. The different messag
### Fields in the Response
-- **`message`** (string): The search result, generated based on the query and focus mode.
+- **`message`** (string): The search result, generated based on the query and enabled `sources`.
- **`sources`** (array): A list of sources that were used to generate the search result. Each source includes:
- - `pageContent`: A snippet of the relevant content from the source.
+ - `content`: A snippet of the relevant content from the source.
- `metadata`: Metadata about the source, including:
- `title`: The title of the webpage.
- `url`: The URL of the webpage.
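For streamed responses, read the body line by line and parse each non-empty line as JSON. A minimal client sketch in TypeScript (assumes Node 18+ or any runtime with `fetch` and web streams; the request body is abbreviated):

```typescript
// Consume the newline-delimited JSON stream from /api/search.
const res = await fetch('http://localhost:3000/api/search', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ /* same fields as above, plus: */ stream: true }),
});

const reader = res.body!.getReader();
const decoder = new TextDecoder();
let buffer = '';

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  buffer += decoder.decode(value, { stream: true });

  // Each event ends with '\n'; keep any trailing partial line in the buffer.
  const lines = buffer.split('\n');
  buffer = lines.pop() ?? '';

  for (const line of lines) {
    if (!line.trim()) continue;
    const event = JSON.parse(line);
    if (event.type === 'response') process.stdout.write(event.data);
    else if (event.type === 'sources') console.log('\nsources:', event.data);
  }
}
```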
@@ -185,5 +186,5 @@ Clients should process each line as a separate JSON object. The different messag
If an error occurs during the search process, the API will return an appropriate error message with an HTTP status code.
-- **400**: If the request is malformed or missing required fields (e.g., no focus mode or query).
+- **400**: If the request is malformed or missing required fields (e.g., no `sources` or `query`).
- **500**: If an internal server error occurs during the search.
diff --git a/docs/architecture/README.md b/docs/architecture/README.md
index 5732471..5593b37 100644
--- a/docs/architecture/README.md
+++ b/docs/architecture/README.md
@@ -1,11 +1,38 @@
-# Perplexica's Architecture
+# Perplexica Architecture
-Perplexica's architecture consists of the following key components:
+Perplexica is a Next.js application that combines an AI chat experience with search.
-1. **User Interface**: A web-based interface that allows users to interact with Perplexica for searching images, videos, and much more.
-2. **Agent/Chains**: These components predict Perplexica's next actions, understand user queries, and decide whether a web search is necessary.
-3. **SearXNG**: A metadata search engine used by Perplexica to search the web for sources.
-4. **LLMs (Large Language Models)**: Utilized by agents and chains for tasks like understanding content, writing responses, and citing sources. Examples include Claude, GPTs, etc.
-5. **Embedding Models**: To improve the accuracy of search results, embedding models re-rank the results using similarity search algorithms such as cosine similarity and dot product distance.
+For a high-level flow, see [WORKING.md](WORKING.md). For deeper implementation details, see [CONTRIBUTING.md](../../CONTRIBUTING.md).
-For a more detailed explanation of how these components work together, see [WORKING.md](https://github.com/ItzCrazyKns/Perplexica/tree/master/docs/architecture/WORKING.md).
+## Key components
+
+1. **User Interface**
+
+   - A web-based UI that lets users chat, search, and view citations.
+
+2. **API Routes**
+
+ - `POST /api/chat` powers the chat UI.
+ - `POST /api/search` provides a programmatic search endpoint.
+ - `GET /api/providers` lists available providers and model keys.
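+
+   To see what's available on a running instance (a plain GET, no body required):
+
+   ```bash
+   curl http://localhost:3000/api/providers
+   ```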
+
+3. **Agents and Orchestration**
+
+ - The system classifies the question first.
+ - It can run research and widgets in parallel.
+ - It generates the final answer and includes citations.
+
+4. **Search Backend**
+
+   - A metasearch backend is used to fetch relevant web results when research is enabled.
+
+5. **LLMs (Large Language Models)**
+
+ - Used for classification, writing answers, and producing citations.
+
+6. **Embedding Models**
+
+   - Used for semantic search over user-uploaded files.
+
+7. **Storage**
+ - Chats and messages are stored so conversations can be reloaded.
diff --git a/docs/architecture/WORKING.md b/docs/architecture/WORKING.md
index 6bad4f9..af29b90 100644
--- a/docs/architecture/WORKING.md
+++ b/docs/architecture/WORKING.md
@@ -1,19 +1,72 @@
-# How does Perplexica work?
+# How Perplexica Works
-Curious about how Perplexica works? Don't worry, we'll cover it here. Before we begin, make sure you've read about the architecture of Perplexica to ensure you understand what it's made up of. Haven't read it? You can read it [here](https://github.com/ItzCrazyKns/Perplexica/tree/master/docs/architecture/README.md).
+This is a high-level overview of how Perplexica answers a question.
-We'll understand how Perplexica works by taking an example of a scenario where a user asks: "How does an A.C. work?". We'll break down the process into steps to make it easier to understand. The steps are as follows:
+If you want a component-level overview, see [README.md](README.md).
-1. The message is sent to the `/api/chat` route where it invokes the chain. The chain will depend on your focus mode. For this example, let's assume we use the "webSearch" focus mode.
-2. The chain is now invoked; first, the message is passed to another chain where it first predicts (using the chat history and the question) whether there is a need for sources and searching the web. If there is, it will generate a query (in accordance with the chat history) for searching the web that we'll take up later. If not, the chain will end there, and then the answer generator chain, also known as the response generator, will be started.
-3. The query returned by the first chain is passed to SearXNG to search the web for information.
-4. After the information is retrieved, it is based on keyword-based search. We then convert the information into embeddings and the query as well, then we perform a similarity search to find the most relevant sources to answer the query.
-5. After all this is done, the sources are passed to the response generator. This chain takes all the chat history, the query, and the sources. It generates a response that is streamed to the UI.
+If you want implementation details, see [CONTRIBUTING.md](../../CONTRIBUTING.md).
-## How are the answers cited?
+## What happens when you ask a question
-The LLMs are prompted to do so. We've prompted them so well that they cite the answers themselves, and using some UI magic, we display it to the user.
+When you send a message in the UI, the app calls `POST /api/chat`.
-## Image and Video Search
+At a high level, we do three things:
-Image and video searches are conducted in a similar manner. A query is always generated first, then we search the web for images and videos that match the query. These results are then returned to the user.
+1. Classify the question and decide what to do next.
+2. Run research and widgets in parallel.
+3. Write the final answer and include citations.
+
+## Classification
+
+Before searching or answering, we run a classification step.
+
+This step decides things like:
+
+- Whether we should do research for this question
+- Whether we should show any widgets
+- How to rewrite the question into a clearer standalone form
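+
+To make this concrete, here is a sketch of the kind of decision object this step could produce. The field names are illustrative, not the actual schema:
+
+```typescript
+// Hypothetical shape of a classification result (for illustration only).
+interface ClassificationResult {
+  doResearch: boolean;     // should we search the web / uploaded files?
+  widgets: string[];       // widgets worth rendering, e.g. ['weather']
+  standaloneQuery: string; // the question rewritten to stand on its own
+}
+```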
+
+## Widgets
+
+Widgets are small, structured helpers that can run alongside research.
+
+Examples include weather, stocks, and simple calculations.
+
+If a widget is relevant, we show it in the UI while the answer is still being generated.
+
+Widgets are helpful context for the answer, but they are not part of what the model should cite.
+
+## Research
+
+If research is needed, we gather information in the background while any widgets run.
+
+Depending on configuration, research may include web lookups and searches over user-uploaded files.
+
+## Answer generation
+
+Once we have enough context, the chat model generates the final response.
+
+You can control the tradeoff between speed and quality using `optimizationMode`:
+
+- `speed`
+- `balanced`
+- `quality`
+
+## How citations work
+
+We prompt the model to cite the references it used. The UI then renders those citations alongside the supporting links.
+
+## Search API
+
+If you are integrating Perplexica into another product, you can call `POST /api/search`.
+
+It returns:
+
+- `message`: the generated answer
+- `sources`: supporting references used for the answer
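+
+A non-streaming call resolves to a single JSON object, roughly shaped like this (values truncated for brevity):
+
+```json
+{
+  "message": "Perplexica is an innovative, open-source AI-powered search engine...",
+  "sources": [
+    {
+      "content": "A snippet of the supporting text.",
+      "metadata": { "title": "Page title", "url": "https://example.com" }
+    }
+  ]
+}
+```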
+
+You can also enable streaming by setting `stream: true`.
+
+## Image and video search
+
+Image and video search use separate endpoints (`POST /api/images` and `POST /api/videos`). We generate a focused query using the chat model, then fetch matching results from a search backend.
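+
+A request sketch for image search (the field names mirror the route's request interface; the provider UUID is a placeholder):
+
+```bash
+curl http://localhost:3000/api/images \
+  -H 'Content-Type: application/json' \
+  -d '{
+    "query": "perplexica interface screenshots",
+    "chatHistory": [],
+    "chatModel": { "providerId": "<provider-uuid>", "key": "gpt-4o-mini" }
+  }'
+```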
diff --git a/docs/installation/UPDATING.md b/docs/installation/UPDATING.md
index 0603671..4f2be75 100644
--- a/docs/installation/UPDATING.md
+++ b/docs/installation/UPDATING.md
@@ -10,7 +10,7 @@ Simply pull the latest image and restart your container:
docker pull itzcrazykns1337/perplexica:latest
docker stop perplexica
docker rm perplexica
-docker run -d -p 3000:3000 -v perplexica-data:/home/perplexica/data -v perplexica-uploads:/home/perplexica/uploads --name perplexica itzcrazykns1337/perplexica:latest
+docker run -d -p 3000:3000 -v perplexica-data:/home/perplexica/data --name perplexica itzcrazykns1337/perplexica:latest
```
For slim version:
@@ -19,7 +19,7 @@ For slim version:
docker pull itzcrazykns1337/perplexica:slim-latest
docker stop perplexica
docker rm perplexica
-docker run -d -p 3000:3000 -e SEARXNG_API_URL=http://your-searxng-url:8080 -v perplexica-data:/home/perplexica/data -v perplexica-uploads:/home/perplexica/uploads --name perplexica itzcrazykns1337/perplexica:slim-latest
+docker run -d -p 3000:3000 -e SEARXNG_API_URL=http://your-searxng-url:8080 -v perplexica-data:/home/perplexica/data --name perplexica itzcrazykns1337/perplexica:slim-latest
```
Once updated, go to http://localhost:3000 and verify the latest changes. Your settings are preserved automatically.
diff --git a/drizzle/0002_daffy_wrecker.sql b/drizzle/0002_daffy_wrecker.sql
new file mode 100644
index 0000000..1520a65
--- /dev/null
+++ b/drizzle/0002_daffy_wrecker.sql
@@ -0,0 +1 @@
+/* do nothing */
\ No newline at end of file
diff --git a/drizzle/meta/0002_snapshot.json b/drizzle/meta/0002_snapshot.json
new file mode 100644
index 0000000..feb820c
--- /dev/null
+++ b/drizzle/meta/0002_snapshot.json
@@ -0,0 +1,132 @@
+{
+ "version": "6",
+ "dialect": "sqlite",
+ "id": "1c5eb804-d6b4-48ec-9a8f-75fb729c8e52",
+ "prevId": "6dedf55f-0e44-478f-82cf-14a21ac686f8",
+ "tables": {
+ "chats": {
+ "name": "chats",
+ "columns": {
+ "id": {
+ "name": "id",
+ "type": "text",
+ "primaryKey": true,
+ "notNull": true,
+ "autoincrement": false
+ },
+ "title": {
+ "name": "title",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": true,
+ "autoincrement": false
+ },
+ "createdAt": {
+ "name": "createdAt",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": true,
+ "autoincrement": false
+ },
+ "sources": {
+ "name": "sources",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": true,
+ "autoincrement": false
+ },
+ "files": {
+ "name": "files",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": false,
+ "autoincrement": false,
+ "default": "'[]'"
+ }
+ },
+ "indexes": {},
+ "foreignKeys": {},
+ "compositePrimaryKeys": {},
+ "uniqueConstraints": {},
+ "checkConstraints": {}
+ },
+ "messages": {
+ "name": "messages",
+ "columns": {
+ "id": {
+ "name": "id",
+ "type": "integer",
+ "primaryKey": true,
+ "notNull": true,
+ "autoincrement": false
+ },
+ "messageId": {
+ "name": "messageId",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": true,
+ "autoincrement": false
+ },
+ "chatId": {
+ "name": "chatId",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": true,
+ "autoincrement": false
+ },
+ "backendId": {
+ "name": "backendId",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": true,
+ "autoincrement": false
+ },
+ "query": {
+ "name": "query",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": true,
+ "autoincrement": false
+ },
+ "createdAt": {
+ "name": "createdAt",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": true,
+ "autoincrement": false
+ },
+ "responseBlocks": {
+ "name": "responseBlocks",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": false,
+ "autoincrement": false,
+ "default": "'[]'"
+ },
+ "status": {
+ "name": "status",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": false,
+ "autoincrement": false,
+ "default": "'answering'"
+ }
+ },
+ "indexes": {},
+ "foreignKeys": {},
+ "compositePrimaryKeys": {},
+ "uniqueConstraints": {},
+ "checkConstraints": {}
+ }
+ },
+ "views": {},
+ "enums": {},
+ "_meta": {
+ "schemas": {},
+ "tables": {},
+ "columns": {}
+ },
+ "internal": {
+ "indexes": {}
+ }
+}
diff --git a/drizzle/meta/_journal.json b/drizzle/meta/_journal.json
index cf1610b..c271ddc 100644
--- a/drizzle/meta/_journal.json
+++ b/drizzle/meta/_journal.json
@@ -15,6 +15,13 @@
"when": 1758863991284,
"tag": "0001_wise_rockslide",
"breakpoints": true
+ },
+ {
+ "idx": 2,
+ "version": "6",
+ "when": 1763732708332,
+ "tag": "0002_daffy_wrecker",
+ "breakpoints": true
}
]
}
diff --git a/next-env.d.ts b/next-env.d.ts
index 1b3be08..c4b7818 100644
--- a/next-env.d.ts
+++ b/next-env.d.ts
@@ -1,5 +1,6 @@
/// <reference types="next" />
/// <reference types="next/image-types/global" />
+import "./.next/dev/types/routes.d.ts";
// NOTE: This file should not be edited
// see https://nextjs.org/docs/app/api-reference/config/typescript for more information.
diff --git a/next.config.mjs b/next.config.mjs
index 2300ff4..5770f76 100644
--- a/next.config.mjs
+++ b/next.config.mjs
@@ -1,3 +1,5 @@
+import pkg from './package.json' with { type: 'json' };
+
/** @type {import('next').NextConfig} */
const nextConfig = {
output: 'standalone',
@@ -9,6 +11,9 @@ const nextConfig = {
],
},
serverExternalPackages: ['pdf-parse'],
+ env: {
+ NEXT_PUBLIC_VERSION: pkg.version,
+ },
};
export default nextConfig;
diff --git a/package.json b/package.json
index 7083b66..1040261 100644
--- a/package.json
+++ b/package.json
@@ -11,53 +11,55 @@
"format:write": "prettier . --write"
},
"dependencies": {
+ "@google/genai": "^1.34.0",
"@headlessui/react": "^2.2.0",
"@headlessui/tailwindcss": "^0.2.2",
- "@huggingface/transformers": "^3.7.5",
- "@iarna/toml": "^2.2.5",
+ "@huggingface/transformers": "^3.8.1",
"@icons-pack/react-simple-icons": "^12.3.0",
- "@langchain/anthropic": "^1.0.0",
- "@langchain/community": "^1.0.0",
- "@langchain/core": "^1.0.1",
- "@langchain/google-genai": "^1.0.0",
- "@langchain/groq": "^1.0.0",
- "@langchain/ollama": "^1.0.0",
- "@langchain/openai": "^1.0.0",
- "@langchain/textsplitters": "^1.0.0",
+ "@phosphor-icons/react": "^2.1.10",
+ "@radix-ui/react-tooltip": "^1.2.8",
"@tailwindcss/typography": "^0.5.12",
+ "@types/jspdf": "^2.0.0",
"axios": "^1.8.3",
"better-sqlite3": "^11.9.1",
"clsx": "^2.1.0",
- "compute-cosine-similarity": "^1.1.0",
"drizzle-orm": "^0.40.1",
- "framer-motion": "^12.23.24",
- "html-to-text": "^9.0.5",
- "jspdf": "^3.0.1",
- "langchain": "^1.0.1",
- "lucide-react": "^0.363.0",
+ "js-tiktoken": "^1.0.21",
+ "jspdf": "^3.0.4",
+ "lightweight-charts": "^5.0.9",
+ "lucide-react": "^0.556.0",
"mammoth": "^1.9.1",
"markdown-to-jsx": "^7.7.2",
- "next": "^15.2.2",
+ "mathjs": "^15.1.0",
+ "motion": "^12.23.26",
+ "next": "^16.0.7",
"next-themes": "^0.3.0",
- "pdf-parse": "^1.1.1",
+ "officeparser": "^5.2.2",
+ "ollama": "^0.6.3",
+ "openai": "^6.9.0",
+ "partial-json": "^0.1.7",
+ "pdf-parse": "^2.4.5",
"react": "^18",
"react-dom": "^18",
+ "react-syntax-highlighter": "^16.1.0",
"react-text-to-speech": "^0.14.5",
"react-textarea-autosize": "^8.5.3",
+ "rfc6902": "^5.1.2",
"sonner": "^1.4.41",
"tailwind-merge": "^2.2.2",
- "winston": "^3.17.0",
+ "turndown": "^7.2.2",
+ "yahoo-finance2": "^3.10.2",
"yet-another-react-lightbox": "^3.17.2",
- "zod": "^3.22.4"
+ "zod": "^4.1.12"
},
"devDependencies": {
"@types/better-sqlite3": "^7.6.12",
- "@types/html-to-text": "^9.0.4",
- "@types/jspdf": "^2.0.0",
"@types/node": "^24.8.1",
"@types/pdf-parse": "^1.1.4",
"@types/react": "^18",
"@types/react-dom": "^18",
+ "@types/react-syntax-highlighter": "^15.5.13",
+ "@types/turndown": "^5.0.6",
"autoprefixer": "^10.0.1",
"drizzle-kit": "^0.30.5",
"eslint": "^8",
diff --git a/src/app/api/chat/route.ts b/src/app/api/chat/route.ts
index 25b8104..6362ebc 100644
--- a/src/app/api/chat/route.ts
+++ b/src/app/api/chat/route.ts
@@ -1,14 +1,14 @@
-import crypto from 'crypto';
-import { AIMessage, BaseMessage, HumanMessage } from '@langchain/core/messages';
-import { EventEmitter } from 'stream';
-import db from '@/lib/db';
-import { chats, messages as messagesSchema } from '@/lib/db/schema';
-import { and, eq, gt } from 'drizzle-orm';
-import { getFileDetails } from '@/lib/utils/files';
-import { searchHandlers } from '@/lib/search';
import { z } from 'zod';
import ModelRegistry from '@/lib/models/registry';
import { ModelWithProvider } from '@/lib/models/types';
+import SearchAgent from '@/lib/agents/search';
+import SessionManager from '@/lib/session';
+import { ChatTurnMessage } from '@/lib/types';
+import { SearchSources } from '@/lib/agents/search/types';
+import db from '@/lib/db';
+import { eq } from 'drizzle-orm';
+import { chats } from '@/lib/db/schema';
+import UploadManager from '@/lib/uploads/manager';
export const runtime = 'nodejs';
export const dynamic = 'force-dynamic';
@@ -20,47 +20,25 @@ const messageSchema = z.object({
});
const chatModelSchema: z.ZodType = z.object({
- providerId: z.string({
- errorMap: () => ({
- message: 'Chat model provider id must be provided',
- }),
- }),
- key: z.string({
- errorMap: () => ({
- message: 'Chat model key must be provided',
- }),
- }),
+ providerId: z.string({ message: 'Chat model provider id must be provided' }),
+ key: z.string({ message: 'Chat model key must be provided' }),
});
const embeddingModelSchema: z.ZodType = z.object({
providerId: z.string({
- errorMap: () => ({
- message: 'Embedding model provider id must be provided',
- }),
- }),
- key: z.string({
- errorMap: () => ({
- message: 'Embedding model key must be provided',
- }),
+ message: 'Embedding model provider id must be provided',
}),
+ key: z.string({ message: 'Embedding model key must be provided' }),
});
const bodySchema = z.object({
message: messageSchema,
optimizationMode: z.enum(['speed', 'balanced', 'quality'], {
- errorMap: () => ({
- message: 'Optimization mode must be one of: speed, balanced, quality',
- }),
+ message: 'Optimization mode must be one of: speed, balanced, quality',
}),
- focusMode: z.string().min(1, 'Focus mode is required'),
+ sources: z.array(z.string()).optional().default([]),
history: z
- .array(
- z.tuple([z.string(), z.string()], {
- errorMap: () => ({
- message: 'History items must be tuples of two strings',
- }),
- }),
- )
+ .array(z.tuple([z.string(), z.string()]))
.optional()
.default([]),
files: z.array(z.string()).optional().default([]),
@@ -69,7 +47,6 @@ const bodySchema = z.object({
systemInstructions: z.string().nullable().optional().default(''),
});
-type Message = z.infer<typeof messageSchema>;
type Body = z.infer<typeof bodySchema>;
const safeValidateBody = (data: unknown) => {
@@ -78,7 +55,7 @@ const safeValidateBody = (data: unknown) => {
if (!result.success) {
return {
success: false,
- error: result.error.errors.map((e) => ({
+ error: result.error.issues.map((e: any) => ({
path: e.path.join('.'),
message: e.message,
})),
@@ -91,143 +68,35 @@ const safeValidateBody = (data: unknown) => {
};
};
-const handleEmitterEvents = async (
- stream: EventEmitter,
- writer: WritableStreamDefaultWriter,
- encoder: TextEncoder,
- chatId: string,
-) => {
- let receivedMessage = '';
- const aiMessageId = crypto.randomBytes(7).toString('hex');
-
- stream.on('data', (data) => {
- const parsedData = JSON.parse(data);
- if (parsedData.type === 'response') {
- writer.write(
- encoder.encode(
- JSON.stringify({
- type: 'message',
- data: parsedData.data,
- messageId: aiMessageId,
- }) + '\n',
- ),
- );
-
- receivedMessage += parsedData.data;
- } else if (parsedData.type === 'sources') {
- writer.write(
- encoder.encode(
- JSON.stringify({
- type: 'sources',
- data: parsedData.data,
- messageId: aiMessageId,
- }) + '\n',
- ),
- );
-
- const sourceMessageId = crypto.randomBytes(7).toString('hex');
-
- db.insert(messagesSchema)
- .values({
- chatId: chatId,
- messageId: sourceMessageId,
- role: 'source',
- sources: parsedData.data,
- createdAt: new Date().toString(),
- })
- .execute();
- }
- });
- stream.on('end', () => {
- writer.write(
- encoder.encode(
- JSON.stringify({
- type: 'messageEnd',
- }) + '\n',
- ),
- );
- writer.close();
-
- db.insert(messagesSchema)
- .values({
- content: receivedMessage,
- chatId: chatId,
- messageId: aiMessageId,
- role: 'assistant',
- createdAt: new Date().toString(),
+const ensureChatExists = async (input: {
+ id: string;
+ sources: SearchSources[];
+ query: string;
+ fileIds: string[];
+}) => {
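+  // Create the chat row on the first message of a conversation; later turns
+  // find the existing row and skip the insert.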
+ try {
+ const exists = await db.query.chats
+ .findFirst({
+ where: eq(chats.id, input.id),
})
.execute();
- });
- stream.on('error', (data) => {
- const parsedData = JSON.parse(data);
- writer.write(
- encoder.encode(
- JSON.stringify({
- type: 'error',
- data: parsedData.data,
+
+ if (!exists) {
+ await db.insert(chats).values({
+ id: input.id,
+ createdAt: new Date().toISOString(),
+ sources: input.sources,
+ title: input.query,
+ files: input.fileIds.map((id) => {
+ return {
+ fileId: id,
+ name: UploadManager.getFile(id)?.name || 'Uploaded File',
+ };
}),
- ),
- );
- writer.close();
- });
-};
-
-const handleHistorySave = async (
- message: Message,
- humanMessageId: string,
- focusMode: string,
- files: string[],
-) => {
- const chat = await db.query.chats.findFirst({
- where: eq(chats.id, message.chatId),
- });
-
- const fileData = files.map(getFileDetails);
-
- if (!chat) {
- await db
- .insert(chats)
- .values({
- id: message.chatId,
- title: message.content,
- createdAt: new Date().toString(),
- focusMode: focusMode,
- files: fileData,
- })
- .execute();
- } else if (JSON.stringify(chat.files ?? []) != JSON.stringify(fileData)) {
- db.update(chats)
- .set({
- files: files.map(getFileDetails),
- })
- .where(eq(chats.id, message.chatId));
- }
-
- const messageExists = await db.query.messages.findFirst({
- where: eq(messagesSchema.messageId, humanMessageId),
- });
-
- if (!messageExists) {
- await db
- .insert(messagesSchema)
- .values({
- content: message.content,
- chatId: message.chatId,
- messageId: humanMessageId,
- role: 'user',
- createdAt: new Date().toString(),
- })
- .execute();
- } else {
- await db
- .delete(messagesSchema)
- .where(
- and(
- gt(messagesSchema.id, messageExists.id),
- eq(messagesSchema.chatId, message.chatId),
- ),
- )
- .execute();
+ });
+ }
+ } catch (err) {
+ console.error('Failed to check/save chat:', err);
}
};
@@ -236,6 +105,7 @@ export const POST = async (req: Request) => {
const reqBody = (await req.json()) as Body;
const parseBody = safeValidateBody(reqBody);
+
if (!parseBody.success) {
return Response.json(
{ message: 'Invalid request body', error: parseBody.error },
@@ -265,48 +135,107 @@ export const POST = async (req: Request) => {
),
]);
- const humanMessageId =
- message.messageId ?? crypto.randomBytes(7).toString('hex');
-
- const history: BaseMessage[] = body.history.map((msg) => {
+ const history: ChatTurnMessage[] = body.history.map((msg) => {
if (msg[0] === 'human') {
- return new HumanMessage({
+ return {
+ role: 'user',
content: msg[1],
- });
+ };
} else {
- return new AIMessage({
+ return {
+ role: 'assistant',
content: msg[1],
- });
+ };
}
});
- const handler = searchHandlers[body.focusMode];
-
- if (!handler) {
- return Response.json(
- {
- message: 'Invalid focus mode',
- },
- { status: 400 },
- );
- }
-
- const stream = await handler.searchAndAnswer(
- message.content,
- history,
- llm,
- embedding,
- body.optimizationMode,
- body.files,
- body.systemInstructions as string,
- );
+ const agent = new SearchAgent();
+ const session = SessionManager.createSession();
const responseStream = new TransformStream();
const writer = responseStream.writable.getWriter();
const encoder = new TextEncoder();
- handleEmitterEvents(stream, writer, encoder, message.chatId);
- handleHistorySave(message, humanMessageId, body.focusMode, body.files);
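+  // Relay session events to the client as newline-delimited JSON frames.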
+ const disconnect = session.subscribe((event: string, data: any) => {
+ if (event === 'data') {
+ if (data.type === 'block') {
+ writer.write(
+ encoder.encode(
+ JSON.stringify({
+ type: 'block',
+ block: data.block,
+ }) + '\n',
+ ),
+ );
+ } else if (data.type === 'updateBlock') {
+ writer.write(
+ encoder.encode(
+ JSON.stringify({
+ type: 'updateBlock',
+ blockId: data.blockId,
+ patch: data.patch,
+ }) + '\n',
+ ),
+ );
+ } else if (data.type === 'researchComplete') {
+ writer.write(
+ encoder.encode(
+ JSON.stringify({
+ type: 'researchComplete',
+ }) + '\n',
+ ),
+ );
+ }
+ } else if (event === 'end') {
+ writer.write(
+ encoder.encode(
+ JSON.stringify({
+ type: 'messageEnd',
+ }) + '\n',
+ ),
+ );
+ writer.close();
+ session.removeAllListeners();
+ } else if (event === 'error') {
+ writer.write(
+ encoder.encode(
+ JSON.stringify({
+ type: 'error',
+ data: data.data,
+ }) + '\n',
+ ),
+ );
+ writer.close();
+ session.removeAllListeners();
+ }
+ });
+
+ agent.searchAsync(session, {
+ chatHistory: history,
+ followUp: message.content,
+ chatId: body.message.chatId,
+ messageId: body.message.messageId,
+ config: {
+ llm,
+ embedding: embedding,
+ sources: body.sources as SearchSources[],
+ mode: body.optimizationMode,
+ fileIds: body.files,
+ systemInstructions: body.systemInstructions || 'None',
+ },
+ });
+
+ ensureChatExists({
+ id: body.message.chatId,
+ sources: body.sources as SearchSources[],
+ fileIds: body.files,
+ query: body.message.content,
+ });
+
+ req.signal.addEventListener('abort', () => {
+ disconnect();
+ writer.close();
+ });
return new Response(responseStream.readable, {
headers: {
diff --git a/src/app/api/images/route.ts b/src/app/api/images/route.ts
index d3416ca..9cfabb2 100644
--- a/src/app/api/images/route.ts
+++ b/src/app/api/images/route.ts
@@ -1,7 +1,6 @@
-import handleImageSearch from '@/lib/chains/imageSearchAgent';
+import searchImages from '@/lib/agents/media/image';
import ModelRegistry from '@/lib/models/registry';
import { ModelWithProvider } from '@/lib/models/types';
-import { AIMessage, BaseMessage, HumanMessage } from '@langchain/core/messages';
interface ImageSearchBody {
query: string;
@@ -13,16 +12,6 @@ export const POST = async (req: Request) => {
try {
const body: ImageSearchBody = await req.json();
- const chatHistory = body.chatHistory
- .map((msg: any) => {
- if (msg.role === 'user') {
- return new HumanMessage(msg.content);
- } else if (msg.role === 'assistant') {
- return new AIMessage(msg.content);
- }
- })
- .filter((msg) => msg !== undefined) as BaseMessage[];
-
const registry = new ModelRegistry();
const llm = await registry.loadChatModel(
@@ -30,9 +19,9 @@ export const POST = async (req: Request) => {
body.chatModel.key,
);
- const images = await handleImageSearch(
+ const images = await searchImages(
{
- chat_history: chatHistory,
+ chatHistory: body.chatHistory,
query: body.query,
},
llm,
diff --git a/src/app/api/reconnect/[id]/route.ts b/src/app/api/reconnect/[id]/route.ts
new file mode 100644
index 0000000..08be11b
--- /dev/null
+++ b/src/app/api/reconnect/[id]/route.ts
@@ -0,0 +1,93 @@
+import SessionManager from '@/lib/session';
+
+export const POST = async (
+ req: Request,
+ { params }: { params: Promise<{ id: string }> },
+) => {
+ try {
+ const { id } = await params;
+
+ const session = SessionManager.getSession(id);
+
+ if (!session) {
+ return Response.json({ message: 'Session not found' }, { status: 404 });
+ }
+
+ const responseStream = new TransformStream();
+ const writer = responseStream.writable.getWriter();
+ const encoder = new TextEncoder();
+
+ const disconnect = session.subscribe((event, data) => {
+ if (event === 'data') {
+ if (data.type === 'block') {
+ writer.write(
+ encoder.encode(
+ JSON.stringify({
+ type: 'block',
+ block: data.block,
+ }) + '\n',
+ ),
+ );
+ } else if (data.type === 'updateBlock') {
+ writer.write(
+ encoder.encode(
+ JSON.stringify({
+ type: 'updateBlock',
+ blockId: data.blockId,
+ patch: data.patch,
+ }) + '\n',
+ ),
+ );
+ } else if (data.type === 'researchComplete') {
+ writer.write(
+ encoder.encode(
+ JSON.stringify({
+ type: 'researchComplete',
+ }) + '\n',
+ ),
+ );
+ }
+ } else if (event === 'end') {
+ writer.write(
+ encoder.encode(
+ JSON.stringify({
+ type: 'messageEnd',
+ }) + '\n',
+ ),
+ );
+ writer.close();
+ disconnect();
+ } else if (event === 'error') {
+ writer.write(
+ encoder.encode(
+ JSON.stringify({
+ type: 'error',
+ data: data.data,
+ }) + '\n',
+ ),
+ );
+ writer.close();
+ disconnect();
+ }
+ });
+
+ req.signal.addEventListener('abort', () => {
+ disconnect();
+ writer.close();
+ });
+
+ return new Response(responseStream.readable, {
+ headers: {
+ 'Content-Type': 'text/event-stream',
+ Connection: 'keep-alive',
+ 'Cache-Control': 'no-cache, no-transform',
+ },
+ });
+ } catch (err) {
+ console.error('Error in reconnecting to session stream: ', err);
+ return Response.json(
+ { message: 'An error has occurred.' },
+ { status: 500 },
+ );
+ }
+};
diff --git a/src/app/api/search/route.ts b/src/app/api/search/route.ts
index bc7255f..0991268 100644
--- a/src/app/api/search/route.ts
+++ b/src/app/api/search/route.ts
@@ -1,12 +1,13 @@
-import { AIMessage, BaseMessage, HumanMessage } from '@langchain/core/messages';
-import { MetaSearchAgentType } from '@/lib/search/metaSearchAgent';
-import { searchHandlers } from '@/lib/search';
import ModelRegistry from '@/lib/models/registry';
import { ModelWithProvider } from '@/lib/models/types';
+import SessionManager from '@/lib/session';
+import { ChatTurnMessage } from '@/lib/types';
+import { SearchSources } from '@/lib/agents/search/types';
+import APISearchAgent from '@/lib/agents/search/api';
interface ChatRequestBody {
- optimizationMode: 'speed' | 'balanced';
- focusMode: string;
+ optimizationMode: 'speed' | 'balanced' | 'quality';
+ sources: SearchSources[];
chatModel: ModelWithProvider;
embeddingModel: ModelWithProvider;
query: string;
@@ -19,23 +20,17 @@ export const POST = async (req: Request) => {
try {
const body: ChatRequestBody = await req.json();
- if (!body.focusMode || !body.query) {
+ if (!body.sources || !body.query) {
return Response.json(
- { message: 'Missing focus mode or query' },
+ { message: 'Missing sources or query' },
{ status: 400 },
);
}
body.history = body.history || [];
- body.optimizationMode = body.optimizationMode || 'balanced';
+ body.optimizationMode = body.optimizationMode || 'speed';
body.stream = body.stream || false;
- const history: BaseMessage[] = body.history.map((msg) => {
- return msg[0] === 'human'
- ? new HumanMessage({ content: msg[1] })
- : new AIMessage({ content: msg[1] });
- });
-
const registry = new ModelRegistry();
const [llm, embeddings] = await Promise.all([
@@ -46,21 +41,30 @@ export const POST = async (req: Request) => {
),
]);
- const searchHandler: MetaSearchAgentType = searchHandlers[body.focusMode];
+ const history: ChatTurnMessage[] = body.history.map((msg) => {
+ return msg[0] === 'human'
+ ? { role: 'user', content: msg[1] }
+ : { role: 'assistant', content: msg[1] };
+ });
- if (!searchHandler) {
- return Response.json({ message: 'Invalid focus mode' }, { status: 400 });
- }
+ const session = SessionManager.createSession();
- const emitter = await searchHandler.searchAndAnswer(
- body.query,
- history,
- llm,
- embeddings,
- body.optimizationMode,
- [],
- body.systemInstructions || '',
- );
+ const agent = new APISearchAgent();
+
+ agent.searchAsync(session, {
+ chatHistory: history,
+ config: {
+ embedding: embeddings,
+ llm: llm,
+ sources: body.sources,
+ mode: body.optimizationMode,
+ fileIds: [],
+ systemInstructions: body.systemInstructions || '',
+ },
+ followUp: body.query,
+ chatId: crypto.randomUUID(),
+ messageId: crypto.randomUUID(),
+ });
if (!body.stream) {
return new Promise(
@@ -71,36 +75,37 @@ export const POST = async (req: Request) => {
let message = '';
let sources: any[] = [];
- emitter.on('data', (data: string) => {
- try {
- const parsedData = JSON.parse(data);
- if (parsedData.type === 'response') {
- message += parsedData.data;
- } else if (parsedData.type === 'sources') {
- sources = parsedData.data;
+ session.subscribe((event: string, data: Record<string, any>) => {
+ if (event === 'data') {
+ try {
+ if (data.type === 'response') {
+ message += data.data;
+ } else if (data.type === 'searchResults') {
+ sources = data.data;
+ }
+ } catch (error) {
+ reject(
+ Response.json(
+ { message: 'Error parsing data' },
+ { status: 500 },
+ ),
+ );
}
- } catch (error) {
+ }
+
+ if (event === 'end') {
+ resolve(Response.json({ message, sources }, { status: 200 }));
+ }
+
+ if (event === 'error') {
reject(
Response.json(
- { message: 'Error parsing data' },
+ { message: 'Search error', error: data },
{ status: 500 },
),
);
}
});
-
- emitter.on('end', () => {
- resolve(Response.json({ message, sources }, { status: 200 }));
- });
-
- emitter.on('error', (error: any) => {
- reject(
- Response.json(
- { message: 'Search error', error },
- { status: 500 },
- ),
- );
- });
},
);
}
@@ -124,61 +129,61 @@ export const POST = async (req: Request) => {
);
signal.addEventListener('abort', () => {
- emitter.removeAllListeners();
+ session.removeAllListeners();
try {
controller.close();
} catch (error) {}
});
- emitter.on('data', (data: string) => {
- if (signal.aborted) return;
+ session.subscribe((event: string, data: Record<string, any>) => {
+ if (event === 'data') {
+ if (signal.aborted) return;
- try {
- const parsedData = JSON.parse(data);
-
- if (parsedData.type === 'response') {
- controller.enqueue(
- encoder.encode(
- JSON.stringify({
- type: 'response',
- data: parsedData.data,
- }) + '\n',
- ),
- );
- } else if (parsedData.type === 'sources') {
- sources = parsedData.data;
- controller.enqueue(
- encoder.encode(
- JSON.stringify({
- type: 'sources',
- data: sources,
- }) + '\n',
- ),
- );
+ try {
+ if (data.type === 'response') {
+ controller.enqueue(
+ encoder.encode(
+ JSON.stringify({
+ type: 'response',
+ data: data.data,
+ }) + '\n',
+ ),
+ );
+ } else if (data.type === 'searchResults') {
+ sources = data.data;
+ controller.enqueue(
+ encoder.encode(
+ JSON.stringify({
+ type: 'sources',
+ data: sources,
+ }) + '\n',
+ ),
+ );
+ }
+ } catch (error) {
+ controller.error(error);
}
- } catch (error) {
- controller.error(error);
}
- });
- emitter.on('end', () => {
- if (signal.aborted) return;
+ if (event === 'end') {
+ if (signal.aborted) return;
- controller.enqueue(
- encoder.encode(
- JSON.stringify({
- type: 'done',
- }) + '\n',
- ),
- );
- controller.close();
- });
+ controller.enqueue(
+ encoder.encode(
+ JSON.stringify({
+ type: 'done',
+ }) + '\n',
+ ),
+ );
+ controller.close();
+ }
- emitter.on('error', (error: any) => {
- if (signal.aborted) return;
+ if (event === 'error') {
+ if (signal.aborted) return;
- controller.error(error);
+ controller.error(data);
+ }
});
},
cancel() {
diff --git a/src/app/api/suggestions/route.ts b/src/app/api/suggestions/route.ts
index d8312cf..07432d6 100644
--- a/src/app/api/suggestions/route.ts
+++ b/src/app/api/suggestions/route.ts
@@ -1,8 +1,6 @@
-import generateSuggestions from '@/lib/chains/suggestionGeneratorAgent';
+import generateSuggestions from '@/lib/agents/suggestions';
import ModelRegistry from '@/lib/models/registry';
import { ModelWithProvider } from '@/lib/models/types';
-import { BaseChatModel } from '@langchain/core/language_models/chat_models';
-import { AIMessage, BaseMessage, HumanMessage } from '@langchain/core/messages';
interface SuggestionsGenerationBody {
chatHistory: any[];
@@ -13,16 +11,6 @@ export const POST = async (req: Request) => {
try {
const body: SuggestionsGenerationBody = await req.json();
- const chatHistory = body.chatHistory
- .map((msg: any) => {
- if (msg.role === 'user') {
- return new HumanMessage(msg.content);
- } else if (msg.role === 'assistant') {
- return new AIMessage(msg.content);
- }
- })
- .filter((msg) => msg !== undefined) as BaseMessage[];
-
const registry = new ModelRegistry();
const llm = await registry.loadChatModel(
@@ -32,7 +20,7 @@ export const POST = async (req: Request) => {
const suggestions = await generateSuggestions(
{
- chat_history: chatHistory,
+ chatHistory: body.chatHistory,
},
llm,
);
diff --git a/src/app/api/uploads/route.ts b/src/app/api/uploads/route.ts
index 2a275f4..9cac0f7 100644
--- a/src/app/api/uploads/route.ts
+++ b/src/app/api/uploads/route.ts
@@ -1,39 +1,16 @@
import { NextResponse } from 'next/server';
-import fs from 'fs';
-import path from 'path';
-import crypto from 'crypto';
-import { PDFLoader } from '@langchain/community/document_loaders/fs/pdf';
-import { DocxLoader } from '@langchain/community/document_loaders/fs/docx';
-import { RecursiveCharacterTextSplitter } from '@langchain/textsplitters';
-import { Document } from '@langchain/core/documents';
import ModelRegistry from '@/lib/models/registry';
-
-interface FileRes {
- fileName: string;
- fileExtension: string;
- fileId: string;
-}
-
-const uploadDir = path.join(process.cwd(), 'uploads');
-
-if (!fs.existsSync(uploadDir)) {
- fs.mkdirSync(uploadDir, { recursive: true });
-}
-
-const splitter = new RecursiveCharacterTextSplitter({
- chunkSize: 500,
- chunkOverlap: 100,
-});
+import UploadManager from '@/lib/uploads/manager';
export async function POST(req: Request) {
try {
const formData = await req.formData();
const files = formData.getAll('files') as File[];
- const embedding_model = formData.get('embedding_model_key') as string;
- const embedding_model_provider = formData.get('embedding_model_provider_id') as string;
+ const embeddingModel = formData.get('embedding_model_key') as string;
+ const embeddingModelProvider = formData.get('embedding_model_provider_id') as string;
- if (!embedding_model || !embedding_model_provider) {
+ if (!embeddingModel || !embeddingModelProvider) {
return NextResponse.json(
{ message: 'Missing embedding model or provider' },
{ status: 400 },
@@ -42,73 +19,13 @@ export async function POST(req: Request) {
const registry = new ModelRegistry();
- const model = await registry.loadEmbeddingModel(embedding_model_provider, embedding_model);
+ const model = await registry.loadEmbeddingModel(embeddingModelProvider, embeddingModel);
+
+ const uploadManager = new UploadManager({
+ embeddingModel: model,
+ })
- const processedFiles: FileRes[] = [];
-
- await Promise.all(
- files.map(async (file: any) => {
- const fileExtension = file.name.split('.').pop();
- if (!['pdf', 'docx', 'txt'].includes(fileExtension!)) {
- return NextResponse.json(
- { message: 'File type not supported' },
- { status: 400 },
- );
- }
-
- const uniqueFileName = `${crypto.randomBytes(16).toString('hex')}.${fileExtension}`;
- const filePath = path.join(uploadDir, uniqueFileName);
-
- const buffer = Buffer.from(await file.arrayBuffer());
- fs.writeFileSync(filePath, new Uint8Array(buffer));
-
- let docs: any[] = [];
- if (fileExtension === 'pdf') {
- const loader = new PDFLoader(filePath);
- docs = await loader.load();
- } else if (fileExtension === 'docx') {
- const loader = new DocxLoader(filePath);
- docs = await loader.load();
- } else if (fileExtension === 'txt') {
- const text = fs.readFileSync(filePath, 'utf-8');
- docs = [
- new Document({ pageContent: text, metadata: { title: file.name } }),
- ];
- }
-
- const splitted = await splitter.splitDocuments(docs);
-
- const extractedDataPath = filePath.replace(/\.\w+$/, '-extracted.json');
- fs.writeFileSync(
- extractedDataPath,
- JSON.stringify({
- title: file.name,
- contents: splitted.map((doc) => doc.pageContent),
- }),
- );
-
- const embeddings = await model.embedDocuments(
- splitted.map((doc) => doc.pageContent),
- );
- const embeddingsDataPath = filePath.replace(
- /\.\w+$/,
- '-embeddings.json',
- );
- fs.writeFileSync(
- embeddingsDataPath,
- JSON.stringify({
- title: file.name,
- embeddings,
- }),
- );
-
- processedFiles.push({
- fileName: file.name,
- fileExtension: fileExtension,
- fileId: uniqueFileName.replace(/\.\w+$/, ''),
- });
- }),
- );
+ const processedFiles = await uploadManager.processFiles(files);
return NextResponse.json({
files: processedFiles,
diff --git a/src/app/api/videos/route.ts b/src/app/api/videos/route.ts
index 02e5909..0d5e03c 100644
--- a/src/app/api/videos/route.ts
+++ b/src/app/api/videos/route.ts
@@ -1,7 +1,6 @@
-import handleVideoSearch from '@/lib/chains/videoSearchAgent';
+import handleVideoSearch from '@/lib/agents/media/video';
import ModelRegistry from '@/lib/models/registry';
import { ModelWithProvider } from '@/lib/models/types';
-import { AIMessage, BaseMessage, HumanMessage } from '@langchain/core/messages';
interface VideoSearchBody {
query: string;
@@ -13,16 +12,6 @@ export const POST = async (req: Request) => {
try {
const body: VideoSearchBody = await req.json();
- const chatHistory = body.chatHistory
- .map((msg: any) => {
- if (msg.role === 'user') {
- return new HumanMessage(msg.content);
- } else if (msg.role === 'assistant') {
- return new AIMessage(msg.content);
- }
- })
- .filter((msg) => msg !== undefined) as BaseMessage[];
-
const registry = new ModelRegistry();
const llm = await registry.loadChatModel(
@@ -32,7 +21,7 @@ export const POST = async (req: Request) => {
const videos = await handleVideoSearch(
{
- chat_history: chatHistory,
+ chatHistory: body.chatHistory,
query: body.query,
},
llm,
diff --git a/src/app/c/[chatId]/page.tsx b/src/app/c/[chatId]/page.tsx
index 39b93f0..06cd823 100644
--- a/src/app/c/[chatId]/page.tsx
+++ b/src/app/c/[chatId]/page.tsx
@@ -1,10 +1,5 @@
'use client';
import ChatWindow from '@/components/ChatWindow';
-import React from 'react';
-const Page = () => {
- return <ChatWindow />;
-};
-
-export default Page;
+export default ChatWindow;
diff --git a/src/app/layout.tsx b/src/app/layout.tsx
index e9fd8c7..535a0e0 100644
--- a/src/app/layout.tsx
+++ b/src/app/layout.tsx
@@ -34,7 +34,7 @@ export default function RootLayout({
return (
-
+
{setupComplete ? (
diff --git a/src/app/library/page.tsx b/src/app/library/page.tsx
index 9c40b2b..3eb923e 100644
--- a/src/app/library/page.tsx
+++ b/src/app/library/page.tsx
@@ -1,8 +1,8 @@
'use client';
import DeleteChat from '@/components/DeleteChat';
-import { cn, formatTimeDifference } from '@/lib/utils';
-import { BookOpenText, ClockIcon, Delete, ScanEye } from 'lucide-react';
+import { formatTimeDifference } from '@/lib/utils';
+import { BookOpenText, ClockIcon, FileText, Globe2Icon } from 'lucide-react';
import Link from 'next/link';
import { useEffect, useState } from 'react';
@@ -10,7 +10,8 @@ export interface Chat {
id: string;
title: string;
createdAt: string;
- focusMode: string;
+ sources: string[];
+ files: { fileId: string; name: string }[];
}
const Page = () => {
@@ -37,74 +38,137 @@ const Page = () => {
fetchChats();
}, []);
- return loading ? (
-
- ) : (
+ return (
-
- {chats.length === 0 && (
-
- )}
- {chats.length > 0 && (
-
- {chats.map((chat, i) => (
-
-
+
+
+
+
+
- {chat.title}
-
-
-
-
-
- {formatTimeDifference(new Date(), chat.createdAt)} Ago
-
-
-
+ Library
+
+
+ Past chats, sources, and uploads.
- ))}
+
+
+
+
+
+ {loading
+ ? 'Loading…'
+ : `${chats.length} ${chats.length === 1 ? 'chat' : 'chats'}`}
+
+
+
+
+
+ {loading ? (
+
+ ) : chats.length === 0 ? (
+
+
+
+
+
+ No chats found.
+
+
+
+ Start a new chat
+ {' '}
+ to see it listed here.
+
+
+ ) : (
+
+
+ {chats.map((chat, index) => {
+ const sourcesLabel =
+ chat.sources.length === 0
+ ? null
+ : chat.sources.length <= 2
+ ? chat.sources
+ .map((s) => s.charAt(0).toUpperCase() + s.slice(1))
+ .join(', ')
+ : `${chat.sources
+ .slice(0, 2)
+ .map((s) => s.charAt(0).toUpperCase() + s.slice(1))
+ .join(', ')} + ${chat.sources.length - 2}`;
+
+ return (
+
+
+
+ {chat.title}
+
+
+
+
+
+
+
+
+
+ {formatTimeDifference(new Date(), chat.createdAt)} Ago
+
+
+ {sourcesLabel && (
+
+
+ {sourcesLabel}
+
+ )}
+ {chat.files.length > 0 && (
+
+
+ {chat.files.length}{' '}
+ {chat.files.length === 1 ? 'file' : 'files'}
+
+ )}
+
+
+ );
+ })}
+
)}
diff --git a/src/components/AssistantSteps.tsx b/src/components/AssistantSteps.tsx
new file mode 100644
index 0000000..c715a92
--- /dev/null
+++ b/src/components/AssistantSteps.tsx
@@ -0,0 +1,266 @@
+'use client';
+
+import {
+ Brain,
+ Search,
+ FileText,
+ ChevronDown,
+ ChevronUp,
+ BookSearch,
+} from 'lucide-react';
+import { motion, AnimatePresence } from 'motion/react';
+import { useEffect, useState } from 'react';
+import { ResearchBlock, ResearchBlockSubStep } from '@/lib/types';
+import { useChat } from '@/lib/hooks/useChat';
+
+const getStepIcon = (step: ResearchBlockSubStep) => {
+ if (step.type === 'reasoning') {
+ return
;
+ } else if (step.type === 'searching' || step.type === 'upload_searching') {
+ return
;
+ } else if (
+ step.type === 'search_results' ||
+ step.type === 'upload_search_results'
+ ) {
+ return
;
+ } else if (step.type === 'reading') {
+ return
;
+ }
+
+ return null;
+};
+
+const getStepTitle = (
+ step: ResearchBlockSubStep,
+ isStreaming: boolean,
+): string => {
+ if (step.type === 'reasoning') {
+ return isStreaming && !step.reasoning ? 'Thinking...' : 'Thinking';
+ } else if (step.type === 'searching') {
+ return `Searching ${step.searching.length} ${step.searching.length === 1 ? 'query' : 'queries'}`;
+ } else if (step.type === 'search_results') {
+ return `Found ${step.reading.length} ${step.reading.length === 1 ? 'result' : 'results'}`;
+ } else if (step.type === 'reading') {
+ return `Reading ${step.reading.length} ${step.reading.length === 1 ? 'source' : 'sources'}`;
+ } else if (step.type === 'upload_searching') {
+ return 'Scanning your uploaded documents';
+ } else if (step.type === 'upload_search_results') {
+ return `Reading ${step.results.length} ${step.results.length === 1 ? 'document' : 'documents'}`;
+ }
+
+ return 'Processing';
+};
+
+const AssistantSteps = ({
+ block,
+ status,
+ isLast,
+}: {
+ block: ResearchBlock;
+ status: 'answering' | 'completed' | 'error';
+ isLast: boolean;
+}) => {
+ const [isExpanded, setIsExpanded] = useState(isLast && status === 'answering');
+ const { researchEnded, loading } = useChat();
+
+ useEffect(() => {
+ if (researchEnded && isLast) {
+ setIsExpanded(false);
+ } else if (status === 'answering' && isLast) {
+ setIsExpanded(true);
+ }
+ }, [researchEnded, status]);
+
+ if (!block || block.data.subSteps.length === 0) return null;
+
+ return (
+
+
setIsExpanded(!isExpanded)}
+ className="w-full flex items-center justify-between p-3 hover:bg-light-200 dark:hover:bg-dark-200 transition duration-200"
+ >
+
+
+
+ Research Progress ({block.data.subSteps.length}{' '}
+ {block.data.subSteps.length === 1 ? 'step' : 'steps'})
+
+
+ {isExpanded ? (
+
+ ) : (
+
+ )}
+
+
+
+ {isExpanded && (
+
+
+ {block.data.subSteps.map((step, index) => {
+ const isLastStep = index === block.data.subSteps.length - 1;
+ const isStreaming = loading && isLastStep && !researchEnded;
+
+ return (
+
+
+
+ {getStepIcon(step)}
+
+ {index < block.data.subSteps.length - 1 && (
+
+ )}
+
+
+
+
+ {getStepTitle(step, isStreaming)}
+
+
+ {step.type === 'reasoning' && (
+ <>
+ {step.reasoning && (
+
+ {step.reasoning}
+
+ )}
+ {isStreaming && !step.reasoning && (
+
+ )}
+ >
+ )}
+
+ {step.type === 'searching' &&
+ step.searching.length > 0 && (
+
+ {step.searching.map((query, idx) => (
+
+ {query}
+
+ ))}
+
+ )}
+
+ {(step.type === 'search_results' ||
+ step.type === 'reading') &&
+ step.reading.length > 0 && (
+
+ )}
+
+ {step.type === 'upload_searching' &&
+ step.queries.length > 0 && (
+
+ {step.queries.map((query, idx) => (
+
+ {query}
+
+ ))}
+
+ )}
+
+ {step.type === 'upload_search_results' &&
+ step.results.length > 0 && (
+
+ {step.results.slice(0, 4).map((result, idx) => {
+ const title =
+ (result.metadata &&
+ (result.metadata.title ||
+ result.metadata.fileName)) ||
+ 'Untitled document';
+
+ return (
+
+ );
+ })}
+
+ )}
+
+
+ );
+ })}
+
+
+ )}
+
+
+ );
+};
+
+export default AssistantSteps;
diff --git a/src/components/Chat.tsx b/src/components/Chat.tsx
index 22e0a48..1c95d26 100644
--- a/src/components/Chat.tsx
+++ b/src/components/Chat.tsx
@@ -7,11 +7,12 @@ import MessageBoxLoading from './MessageBoxLoading';
import { useChat } from '@/lib/hooks/useChat';
const Chat = () => {
- const { sections, chatTurns, loading, messageAppeared } = useChat();
+ const { sections, loading, messageAppeared, messages } = useChat();
const [dividerWidth, setDividerWidth] = useState(0);
const dividerRef = useRef<HTMLDivElement>(null);
const messageEnd = useRef(null);
+ const lastScrolledRef = useRef(0);
useEffect(() => {
const updateDividerWidth = () => {
@@ -22,43 +23,48 @@ const Chat = () => {
updateDividerWidth();
+ const resizeObserver = new ResizeObserver(() => {
+ updateDividerWidth();
+ });
+
+ const currentRef = dividerRef.current;
+ if (currentRef) {
+ resizeObserver.observe(currentRef);
+ }
+
window.addEventListener('resize', updateDividerWidth);
return () => {
+ if (currentRef) {
+ resizeObserver.unobserve(currentRef);
+ }
+ resizeObserver.disconnect();
window.removeEventListener('resize', updateDividerWidth);
};
- }, []);
+ }, [sections.length]);
useEffect(() => {
const scroll = () => {
messageEnd.current?.scrollIntoView({ behavior: 'auto' });
};
- if (chatTurns.length === 1) {
- document.title = `${chatTurns[0].content.substring(0, 30)} - Perplexica`;
+ if (messages.length === 1) {
+ document.title = `${messages[0].query.substring(0, 30)} - Perplexica`;
}
- const messageEndBottom =
- messageEnd.current?.getBoundingClientRect().bottom ?? 0;
-
- const distanceFromMessageEnd = window.innerHeight - messageEndBottom;
-
- if (distanceFromMessageEnd >= -100) {
+ if (sections.length > lastScrolledRef.current) {
scroll();
+ lastScrolledRef.current = sections.length;
}
-
- if (chatTurns[chatTurns.length - 1]?.role === 'user') {
- scroll();
- }
- }, [chatTurns]);
+ }, [messages]);
return (
-
+
{sections.map((section, i) => {
const isLast = i === sections.length - 1;
return (
-
+
{
         {loading && !messageAppeared && <MessageBoxLoading />}
{dividerWidth > 0 && (
-
+
)}
diff --git a/src/components/ChatWindow.tsx b/src/components/ChatWindow.tsx
index c04b4ea..a2a9f67 100644
--- a/src/components/ChatWindow.tsx
+++ b/src/components/ChatWindow.tsx
@@ -1,15 +1,13 @@
'use client';
-import { Document } from '@langchain/core/documents';
import Navbar from './Navbar';
import Chat from './Chat';
import EmptyChat from './EmptyChat';
-import { Settings } from 'lucide-react';
-import Link from 'next/link';
import NextError from 'next/error';
import { useChat } from '@/lib/hooks/useChat';
-import Loader from './ui/Loader';
import SettingsButtonMobile from './Settings/SettingsButtonMobile';
+import { Block } from '@/lib/types';
+import Loader from './ui/Loader';
export interface BaseMessage {
chatId: string;
@@ -17,42 +15,27 @@ export interface BaseMessage {
createdAt: Date;
}
-export interface AssistantMessage extends BaseMessage {
- role: 'assistant';
- content: string;
- suggestions?: string[];
+export interface Message extends BaseMessage {
+ backendId: string;
+ query: string;
+ responseBlocks: Block[];
+ status: 'answering' | 'completed' | 'error';
}
-export interface UserMessage extends BaseMessage {
- role: 'user';
- content: string;
-}
-
-export interface SourceMessage extends BaseMessage {
- role: 'source';
- sources: Document[];
-}
-
-export interface SuggestionMessage extends BaseMessage {
- role: 'suggestion';
- suggestions: string[];
-}
-
-export type Message =
- | AssistantMessage
- | UserMessage
- | SourceMessage
- | SuggestionMessage;
-export type ChatTurn = UserMessage | AssistantMessage;
-
export interface File {
fileName: string;
fileExtension: string;
fileId: string;
}
+export interface Widget {
+ widgetType: string;
+  params: Record<string, any>;
+}
+
const ChatWindow = () => {
- const { hasError, isReady, notFound, messages } = useChat();
+ const { hasError, notFound, messages, isReady } = useChat();
+
if (hasError) {
return (
@@ -84,7 +67,7 @@ const ChatWindow = () => {
)
) : (
-
+
);
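To make the refactor concrete: a chat turn is now one `Message` whose answer streams in as typed blocks. A hypothetical value (field contents illustrative; the block shapes follow their usage in MessageBox and Navbar below, and the cast papers over `Block` fields this diff does not show):

```ts
import { Block } from '@/lib/types';
import { Message } from '@/components/ChatWindow';

const example: Message = {
  chatId: 'chat_1',
  createdAt: new Date(), // BaseMessage also has fields not visible in this hunk
  backendId: 'b_1',
  query: 'What is Perplexica?',
  status: 'completed',
  responseBlocks: [
    { type: 'research', data: { subSteps: [] } }, // progress steps
    { type: 'source', data: [] }, // cited chunks
    { type: 'text', data: 'Perplexica is …' }, // answer markdown
  ] as unknown as Block[], // cast: Block's full shape isn't shown in this diff
};
```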
diff --git a/src/components/EmptyChat.tsx b/src/components/EmptyChat.tsx
index d9b6686..775fc9d 100644
--- a/src/components/EmptyChat.tsx
+++ b/src/components/EmptyChat.tsx
@@ -1,3 +1,6 @@
+'use client';
+
+import { useEffect, useState } from 'react';
import { Settings } from 'lucide-react';
import EmptyChatMessageInput from './EmptyChatMessageInput';
import { File } from './ChatWindow';
@@ -5,8 +8,39 @@ import Link from 'next/link';
import WeatherWidget from './WeatherWidget';
import NewsArticleWidget from './NewsArticleWidget';
import SettingsButtonMobile from '@/components/Settings/SettingsButtonMobile';
+import {
+ getShowNewsWidget,
+ getShowWeatherWidget,
+} from '@/lib/config/clientRegistry';
const EmptyChat = () => {
+ const [showWeather, setShowWeather] = useState(() =>
+ typeof window !== 'undefined' ? getShowWeatherWidget() : true,
+ );
+ const [showNews, setShowNews] = useState(() =>
+ typeof window !== 'undefined' ? getShowNewsWidget() : true,
+ );
+
+ useEffect(() => {
+ const updateWidgetVisibility = () => {
+ setShowWeather(getShowWeatherWidget());
+ setShowNews(getShowNewsWidget());
+ };
+
+ updateWidgetVisibility();
+
+ window.addEventListener('client-config-changed', updateWidgetVisibility);
+ window.addEventListener('storage', updateWidgetVisibility);
+
+ return () => {
+ window.removeEventListener(
+ 'client-config-changed',
+ updateWidgetVisibility,
+ );
+ window.removeEventListener('storage', updateWidgetVisibility);
+ };
+ }, []);
+
return (
@@ -19,14 +53,20 @@ const EmptyChat = () => {
-
-
-
+ {(showWeather || showNews) && (
+
+ {showWeather && (
+
+
+
+ )}
+ {showNews && (
+
+
+
+ )}
-
-
-
-
+ )}
);
diff --git a/src/components/EmptyChatMessageInput.tsx b/src/components/EmptyChatMessageInput.tsx
index 770c647..6d159f9 100644
--- a/src/components/EmptyChatMessageInput.tsx
+++ b/src/components/EmptyChatMessageInput.tsx
@@ -1,7 +1,7 @@
import { ArrowRight } from 'lucide-react';
import { useEffect, useRef, useState } from 'react';
import TextareaAutosize from 'react-textarea-autosize';
-import Focus from './MessageInputActions/Focus';
+import Sources from './MessageInputActions/Sources';
import Optimization from './MessageInputActions/Optimization';
import Attach from './MessageInputActions/Attach';
import { useChat } from '@/lib/hooks/useChat';
@@ -68,8 +68,8 @@ const EmptyChatMessageInput = () => {
-          <Focus />
+          <Sources />
diff --git a/src/components/MessageActions/Copy.tsx b/src/components/MessageActions/Copy.tsx
        onClick={() => {
- const contentToCopy = `${initialMessage}${section?.sourceMessage?.sources && section.sourceMessage.sources.length > 0 && `\n\nCitations:\n${section.sourceMessage.sources?.map((source: any, i: any) => `[${i + 1}] ${source.metadata.url}`).join(`\n`)}`}`;
+ const sources = section.message.responseBlocks.filter(
+ (b) => b.type === 'source' && b.data.length > 0,
+ ) as SourceBlock[];
+
+ const contentToCopy = `${initialMessage}${
+ sources.length > 0
+ ? `\n\nCitations:\n${sources
+ .map((source) => source.data)
+ .flat()
+ .map(
+ (s, i) =>
+ `[${i + 1}] ${s.metadata.url.startsWith('file_id://') ? s.metadata.fileName || 'Uploaded File' : s.metadata.url}`,
+ )
+ .join(`\n`)}`
+ : ''
+ }`;
+
navigator.clipboard.writeText(contentToCopy);
+
setCopied(true);
setTimeout(() => setCopied(false), 1000);
}}
- className="p-2 text-black/70 dark:text-white/70 rounded-xl hover:bg-light-secondary dark:hover:bg-dark-secondary transition duration-200 hover:text-black dark:hover:text-white"
+ className="p-2 text-black/70 dark:text-white/70 rounded-full hover:bg-light-secondary dark:hover:bg-dark-secondary transition duration-200 hover:text-black dark:hover:text-white"
>
- {copied ? : }
+ {copied ? : }
);
};
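The rebuilt copy handler resolves uploaded-file sources to a readable name instead of leaking the raw `file_id://` URL. Illustrative result (sample chunks invented for the example):

```ts
// Given source data like:
//   { metadata: { url: 'https://example.com/a' } }
//   { metadata: { url: 'file_id://abc123', fileName: 'notes.pdf' } }
// the clipboard text ends with:
//
//   Citations:
//   [1] https://example.com/a
//   [2] notes.pdf
```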
diff --git a/src/components/MessageActions/Rewrite.tsx b/src/components/MessageActions/Rewrite.tsx
index 80fadb3..3902e1e 100644
--- a/src/components/MessageActions/Rewrite.tsx
+++ b/src/components/MessageActions/Rewrite.tsx
@@ -1,4 +1,4 @@
-import { ArrowLeftRight } from 'lucide-react';
+import { ArrowLeftRight, Repeat } from 'lucide-react';
const Rewrite = ({
rewrite,
@@ -10,12 +10,11 @@ const Rewrite = ({
return (
rewrite(messageId)}
- className="py-2 px-3 text-black/70 dark:text-white/70 rounded-xl hover:bg-light-secondary dark:hover:bg-dark-secondary transition duration-200 hover:text-black dark:hover:text-white flex flex-row items-center space-x-1"
+ className="p-2 text-black/70 dark:text-white/70 rounded-full hover:bg-light-secondary dark:hover:bg-dark-secondary transition duration-200 hover:text-black dark:hover:text-white flex flex-row items-center space-x-1"
>
-
- Rewrite
+
);
};
-
export default Rewrite;
diff --git a/src/components/MessageBox.tsx b/src/components/MessageBox.tsx
index 062bb90..19e3546 100644
--- a/src/components/MessageBox.tsx
+++ b/src/components/MessageBox.tsx
@@ -10,8 +10,9 @@ import {
StopCircle,
Layers3,
Plus,
+ CornerDownRight,
} from 'lucide-react';
-import Markdown, { MarkdownToJSX } from 'markdown-to-jsx';
+import Markdown, { MarkdownToJSX, RuleType } from 'markdown-to-jsx';
import Copy from './MessageActions/Copy';
import Rewrite from './MessageActions/Rewrite';
import MessageSources from './MessageSources';
@@ -20,7 +21,11 @@ import SearchVideos from './SearchVideos';
import { useSpeech } from 'react-text-to-speech';
import ThinkBox from './ThinkBox';
import { useChat, Section } from '@/lib/hooks/useChat';
-import Citation from './Citation';
+import Citation from './MessageRenderer/Citation';
+import AssistantSteps from './AssistantSteps';
+import { ResearchBlock } from '@/lib/types';
+import Renderer from './Widgets/Renderer';
+import CodeBlock from './MessageRenderer/CodeBlock';
const ThinkTagProcessor = ({
children,
@@ -45,15 +50,39 @@ const MessageBox = ({
   dividerRef?: MutableRefObject<HTMLDivElement | null>;
isLast: boolean;
}) => {
- const { loading, chatTurns, sendMessage, rewrite } = useChat();
+ const { loading, sendMessage, rewrite, messages, researchEnded } = useChat();
- const parsedMessage = section.parsedAssistantMessage || '';
+ const parsedMessage = section.parsedTextBlocks.join('\n\n');
const speechMessage = section.speechMessage || '';
const thinkingEnded = section.thinkingEnded;
+ const sourceBlocks = section.message.responseBlocks.filter(
+ (block): block is typeof block & { type: 'source' } =>
+ block.type === 'source',
+ );
+
+ const sources = sourceBlocks.flatMap((block) => block.data);
+
+ const hasContent = section.parsedTextBlocks.length > 0;
+
const { speechStatus, start, stop } = useSpeech({ text: speechMessage });
const markdownOverrides: MarkdownToJSX.Options = {
+ renderRule(next, node, renderChildren, state) {
+ if (node.type === RuleType.codeInline) {
+ return `\`${node.text}\``;
+ }
+
+ if (node.type === RuleType.codeBlock) {
+ return (
+          <CodeBlock language={node.lang}>
+            {node.text}
+          </CodeBlock>
+ );
+ }
+
+ return next();
+ },
overrides: {
think: {
component: ThinkTagProcessor,
@@ -71,7 +100,7 @@ const MessageBox = ({
- {section.userMessage.content}
+ {section.message.query}
@@ -80,21 +109,51 @@ const MessageBox = ({
ref={dividerRef}
className="flex flex-col space-y-6 w-full lg:w-9/12"
>
- {section.sourceMessage &&
- section.sourceMessage.sources.length > 0 && (
-
-
-
-
- Sources
-
-
-
+ {sources.length > 0 && (
+            <MessageSources sources={sources} />
+ )}
+
+ {section.message.responseBlocks
+ .filter(
+ (block): block is ResearchBlock =>
+ block.type === 'research' && block.data.subSteps.length > 0,
+ )
+ .map((researchBlock) => (
+              <AssistantSteps
+                block={researchBlock}
+                status={section.message.status}
+                isLast={isLast}
+              />
+ ))}
+
+ {isLast &&
+ loading &&
+ !researchEnded &&
+ !section.message.responseBlocks.some(
+ (b) => b.type === 'research' && b.data.subSteps.length > 0,
+ ) && (
+
+
+
+ Brainstorming...
+
)}
+          {section.widgets.length > 0 && <Renderer widgets={section.widgets} />}
}
+
- {section.sourceMessage && (
+ {sources.length > 0 && (
)}
- {section.assistantMessage && (
+ {hasContent && (
<>
{loading && isLast ? null : (
-
-
+
+
-
-
+
+
{
if (speechStatus === 'started') {
@@ -142,12 +198,12 @@ const MessageBox = ({
start();
}
}}
- className="p-2 text-black/70 dark:text-white/70 rounded-xl hover:bg-light-secondary dark:hover:bg-dark-secondary transition duration-200 hover:text-black dark:hover:text-white"
+ className="p-2 text-black/70 dark:text-white/70 rounded-full hover:bg-light-secondary dark:hover:bg-dark-secondary transition duration-200 hover:text-black dark:hover:text-white"
>
{speechStatus === 'started' ? (
-
+
) : (
-
+
)}
@@ -157,9 +213,9 @@ const MessageBox = ({
{isLast &&
section.suggestions &&
section.suggestions.length > 0 &&
- section.assistantMessage &&
+ hasContent &&
!loading && (
-
+
(
- {i > 0 && (
-
- )}
+
sendMessage(suggestion)}
- className="group w-full px-3 py-4 text-left transition-colors duration-200"
+ className="group w-full py-4 text-left transition-colors duration-200"
>
@@ -201,17 +261,17 @@ const MessageBox = ({
- {section.assistantMessage && (
+ {hasContent && (
)}
diff --git a/src/components/MessageInput.tsx b/src/components/MessageInput.tsx
index d1fc989..56054eb 100644
--- a/src/components/MessageInput.tsx
+++ b/src/components/MessageInput.tsx
@@ -2,9 +2,6 @@ import { cn } from '@/lib/utils';
import { ArrowUp } from 'lucide-react';
import { useEffect, useRef, useState } from 'react';
import TextareaAutosize from 'react-textarea-autosize';
-import Attach from './MessageInputActions/Attach';
-import CopilotToggle from './MessageInputActions/Copilot';
-import { File } from './ChatWindow';
import AttachSmall from './MessageInputActions/AttachSmall';
import { useChat } from '@/lib/hooks/useChat';
@@ -64,7 +61,7 @@ const MessageInput = () => {
}
}}
className={cn(
- 'bg-light-secondary dark:bg-dark-secondary p-4 flex items-center overflow-hidden border border-light-200 dark:border-dark-200 shadow-sm shadow-light-200/10 dark:shadow-black/20 transition-all duration-200 focus-within:border-light-300 dark:focus-within:border-dark-300',
+ 'relative bg-light-secondary dark:bg-dark-secondary p-4 flex items-center overflow-visible border border-light-200 dark:border-dark-200 shadow-sm shadow-light-200/10 dark:shadow-black/20 transition-all duration-200 focus-within:border-light-300 dark:focus-within:border-dark-300',
mode === 'multi' ? 'flex-col rounded-2xl' : 'flex-row rounded-full',
)}
>
@@ -80,11 +77,16 @@ const MessageInput = () => {
placeholder="Ask a follow-up"
/>
{mode === 'single' && (
-
-
+
+
+
+ )}
+ {mode === 'multi' && (
+
)}
- {mode === 'multi' && (
-
- )}
);
};
diff --git a/src/components/MessageInputActions/Attach.tsx b/src/components/MessageInputActions/Attach.tsx
index fbc2e7e..84d7152 100644
--- a/src/components/MessageInputActions/Attach.tsx
+++ b/src/components/MessageInputActions/Attach.tsx
@@ -16,6 +16,8 @@ import {
} from 'lucide-react';
import { Fragment, useRef, useState } from 'react';
import { useChat } from '@/lib/hooks/useChat';
+import { AnimatePresence, motion } from 'motion/react';
const Attach = () => {
const { files, setFiles, setFileIds, fileIds } = useChat();
@@ -53,86 +55,95 @@ const Attach = () => {
return loading ? (
-
+
) : files.length > 0 ? (
-
-
-
-
-
-
-
-
- Attached files
-
-
-
fileInputRef.current.click()}
- className="flex flex-row items-center space-x-1 text-black/70 dark:text-white/70 hover:text-black hover:dark:text-white transition duration-200 focus:outline-none"
+ {({ open }) => (
+ <>
+
+
+
+
+ {open && (
+
+
-
-
- Add
-
-
{
- setFiles([]);
- setFileIds([]);
- }}
- className="flex flex-row items-center space-x-1 text-black/70 dark:text-white/70 hover:text-black hover:dark:text-white transition duration-200 focus:outline-none"
- >
-
- Clear
-
-
-
-
-
- {files.map((file, i) => (
-
-
-
+
+
+ Attached files
+
+
+
fileInputRef.current.click()}
+ className="flex flex-row items-center space-x-1 text-black/70 dark:text-white/70 hover:text-black hover:dark:text-white transition duration-200 focus:outline-none"
+ >
+
+
+ Add
+
+
{
+ setFiles([]);
+ setFileIds([]);
+ }}
+ className="flex flex-row items-center space-x-1 text-black/70 dark:text-white/70 hover:text-black hover:dark:text-white transition duration-200 focus:outline-none"
+ >
+
+ Clear
+
+
-
- {file.fileName.length > 25
- ? file.fileName.replace(/\.\w+$/, '').substring(0, 25) +
- '...' +
- file.fileExtension
- : file.fileName}
-
-
- ))}
-
-
-
-
+
+
+ {files.map((file, i) => (
+
+
+
+
+
+ {file.fileName.length > 25
+ ? file.fileName
+ .replace(/\.\w+$/, '')
+ .substring(0, 25) +
+ '...' +
+ file.fileExtension
+ : file.fileName}
+
+
+ ))}
+
+
+
+ )}
+
+ >
+ )}
) : (
diff --git a/src/components/MessageInputActions/AttachSmall.tsx b/src/components/MessageInputActions/AttachSmall.tsx
 const AttachSmall = () => {
const { files, setFiles, setFileIds, fileIds } = useChat();
@@ -53,86 +46,95 @@ const AttachSmall = () => {
return loading ? (
-
+
) : files.length > 0 ? (
-
-
-
-
-
-
-
-
- Attached files
-
-
-
fileInputRef.current.click()}
- className="flex flex-row items-center space-x-1 text-black/70 dark:text-white/70 hover:text-black hover:dark:text-white transition duration-200"
+ {({ open }) => (
+ <>
+
+
+
+
+ {open && (
+
+
-
-
- Add
-
-
{
- setFiles([]);
- setFileIds([]);
- }}
- className="flex flex-row items-center space-x-1 text-black/70 dark:text-white/70 hover:text-black hover:dark:text-white transition duration-200"
- >
-
- Clear
-
-
-
-
-
- {files.map((file, i) => (
-
-
-
+
+
+ Attached files
+
+
+
fileInputRef.current.click()}
+ className="flex flex-row items-center space-x-1 text-black/70 dark:text-white/70 hover:text-black hover:dark:text-white transition duration-200"
+ >
+
+
+ Add
+
+
{
+ setFiles([]);
+ setFileIds([]);
+ }}
+ className="flex flex-row items-center space-x-1 text-black/70 dark:text-white/70 hover:text-black hover:dark:text-white transition duration-200"
+ >
+
+ Clear
+
+
-
- {file.fileName.length > 25
- ? file.fileName.replace(/\.\w+$/, '').substring(0, 25) +
- '...' +
- file.fileExtension
- : file.fileName}
-
-
- ))}
-
-
-
-
+
+
+ {files.map((file, i) => (
+
+
+
+
+
+ {file.fileName.length > 25
+ ? file.fileName
+ .replace(/\.\w+$/, '')
+ .substring(0, 25) +
+ '...' +
+ file.fileExtension
+ : file.fileName}
+
+
+ ))}
+
+
+
+ )}
+
+ >
+ )}
) : (
{
const [providers, setProviders] = useState([]);
@@ -79,119 +75,127 @@ const ModelSelector = () => {
return (
-
-
-
-
-
-
-
-
-
- setSearchQuery(e.target.value)}
- className="w-full pl-9 pr-3 py-2 bg-light-secondary dark:bg-dark-secondary rounded-lg placeholder:text-sm text-sm text-black dark:text-white placeholder:text-black/40 dark:placeholder:text-white/40 focus:outline-none focus:ring-2 focus:ring-sky-500/20 border border-transparent focus:border-sky-500/30 transition duration-200"
- />
-
-
+ {({ open }) => (
+ <>
+
+
+
+
+ {open && (
+
+
+
+
+
+ setSearchQuery(e.target.value)}
+ className="w-full pl-8 pr-3 py-2 bg-light-secondary dark:bg-dark-secondary rounded-lg placeholder:text-xs placeholder:-translate-y-[1.5px] text-xs text-black dark:text-white placeholder:text-black/40 dark:placeholder:text-white/40 focus:outline-none border border-transparent transition duration-200"
+ />
+
+
-
- {isLoading ? (
-
-
-
- ) : filteredProviders.length === 0 ? (
-
- {searchQuery
- ? 'No models found'
- : 'No chat models configured'}
-
- ) : (
-
- {filteredProviders.map((provider, providerIndex) => (
-
-
-
- {provider.name}
-
+
+ {isLoading ? (
+
+
-
-
- {provider.chatModels.map((model) => (
-
- handleModelSelect(provider.id, model.key)
- }
- type="button"
- className={cn(
- 'px-3 py-2 flex items-center justify-between text-start duration-200 cursor-pointer transition rounded-lg group',
- chatModelProvider?.providerId === provider.id &&
- chatModelProvider?.key === model.key
- ? 'bg-light-secondary dark:bg-dark-secondary'
- : 'hover:bg-light-secondary dark:hover:bg-dark-secondary',
- )}
- >
-
-
-
- {model.name}
+ ) : filteredProviders.length === 0 ? (
+
+ {searchQuery
+ ? 'No models found'
+ : 'No chat models configured'}
+
+ ) : (
+
+ {filteredProviders.map((provider, providerIndex) => (
+
+
-
+
+
+ {provider.chatModels.map((model) => (
+
+ handleModelSelect(provider.id, model.key)
+ }
+ type="button"
+ className={cn(
+ 'px-3 py-2 flex items-center justify-between text-start duration-200 cursor-pointer transition rounded-lg group',
+ chatModelProvider?.providerId ===
+ provider.id &&
+ chatModelProvider?.key === model.key
+ ? 'bg-light-secondary dark:bg-dark-secondary'
+ : 'hover:bg-light-secondary dark:hover:bg-dark-secondary',
+ )}
+ >
+
+
+ ))}
+
+
+ {providerIndex < filteredProviders.length - 1 && (
+
+ )}
+
))}
-
- {providerIndex < filteredProviders.length - 1 && (
-
- )}
-
- ))}
-
- )}
-
-
-
-
+ )}
+
+
+
+ )}
+
+ >
+ )}
);
};
diff --git a/src/components/MessageInputActions/Copilot.tsx b/src/components/MessageInputActions/Copilot.tsx
deleted file mode 100644
index 5a3e476..0000000
--- a/src/components/MessageInputActions/Copilot.tsx
+++ /dev/null
@@ -1,43 +0,0 @@
-import { cn } from '@/lib/utils';
-import { Switch } from '@headlessui/react';
-
-const CopilotToggle = ({
- copilotEnabled,
- setCopilotEnabled,
-}: {
- copilotEnabled: boolean;
- setCopilotEnabled: (enabled: boolean) => void;
-}) => {
- return (
-
-
- Copilot
-
-
-
setCopilotEnabled(!copilotEnabled)}
- className={cn(
- 'text-xs font-medium transition-colors duration-150 ease-in-out',
- copilotEnabled
- ? 'text-[#24A0ED]'
- : 'text-black/50 dark:text-white/50 group-hover:text-black dark:group-hover:text-white',
- )}
- >
- Copilot
-
-
- );
-};
-
-export default CopilotToggle;
diff --git a/src/components/MessageInputActions/Focus.tsx b/src/components/MessageInputActions/Focus.tsx
deleted file mode 100644
index 58b1a39..0000000
--- a/src/components/MessageInputActions/Focus.tsx
+++ /dev/null
@@ -1,123 +0,0 @@
-import {
- BadgePercent,
- ChevronDown,
- Globe,
- Pencil,
- ScanEye,
- SwatchBook,
-} from 'lucide-react';
-import { cn } from '@/lib/utils';
-import {
- Popover,
- PopoverButton,
- PopoverPanel,
- Transition,
-} from '@headlessui/react';
-import { SiReddit, SiYoutube } from '@icons-pack/react-simple-icons';
-import { Fragment } from 'react';
-import { useChat } from '@/lib/hooks/useChat';
-
-const focusModes = [
- {
- key: 'webSearch',
- title: 'All',
- description: 'Searches across all of the internet',
- icon:
,
- },
- {
- key: 'academicSearch',
- title: 'Academic',
- description: 'Search in published academic papers',
- icon:
,
- },
- {
- key: 'writingAssistant',
- title: 'Writing',
- description: 'Chat without searching the web',
- icon:
,
- },
- {
- key: 'wolframAlphaSearch',
- title: 'Wolfram Alpha',
- description: 'Computational knowledge engine',
- icon:
,
- },
- {
- key: 'youtubeSearch',
- title: 'Youtube',
- description: 'Search and watch videos',
- icon:
,
- },
- {
- key: 'redditSearch',
- title: 'Reddit',
- description: 'Search for discussions and opinions',
- icon:
,
- },
-];
-
-const Focus = () => {
- const { focusMode, setFocusMode } = useChat();
-
- return (
-
-
- {focusMode !== 'webSearch' ? (
-
- {focusModes.find((mode) => mode.key === focusMode)?.icon}
-
- ) : (
-
-
-
- )}
-
-
-
-
- {focusModes.map((mode, i) => (
-
setFocusMode(mode.key)}
- key={i}
- className={cn(
- 'p-2 rounded-lg flex flex-col items-start justify-start text-start space-y-2 duration-200 cursor-pointer transition focus:outline-none',
- focusMode === mode.key
- ? 'bg-light-secondary dark:bg-dark-secondary'
- : 'hover:bg-light-secondary dark:hover:bg-dark-secondary',
- )}
- >
-
- {mode.icon}
-
{mode.title}
-
-
- {mode.description}
-
-
- ))}
-
-
-
-
- );
-};
-
-export default Focus;
diff --git a/src/components/MessageInputActions/Optimization.tsx b/src/components/MessageInputActions/Optimization.tsx
index fe04190..2f0cd82 100644
--- a/src/components/MessageInputActions/Optimization.tsx
+++ b/src/components/MessageInputActions/Optimization.tsx
@@ -8,6 +8,7 @@ import {
} from '@headlessui/react';
import { Fragment } from 'react';
import { useChat } from '@/lib/hooks/useChat';
+import { AnimatePresence, motion } from 'motion/react';
const OptimizationModes = [
{
@@ -24,7 +25,7 @@ const OptimizationModes = [
},
{
key: 'quality',
- title: 'Quality (Soon)',
+ title: 'Quality',
description: 'Get the most thorough and accurate answer',
icon: (
{
/>
-
-
-
- {OptimizationModes.map((mode, i) => (
-
setOptimizationMode(mode.key)}
- key={i}
- disabled={mode.key === 'quality'}
- className={cn(
- 'p-2 rounded-lg flex flex-col items-start justify-start text-start space-y-1 duration-200 cursor-pointer transition focus:outline-none',
- optimizationMode === mode.key
- ? 'bg-light-secondary dark:bg-dark-secondary'
- : 'hover:bg-light-secondary dark:hover:bg-dark-secondary',
- mode.key === 'quality' && 'opacity-50 cursor-not-allowed',
- )}
- >
-
- {mode.icon}
-
{mode.title}
-
-
- {mode.description}
-
-
- ))}
-
-
-
+
+ {open && (
+
+
+ {OptimizationModes.map((mode, i) => (
+ setOptimizationMode(mode.key)}
+ key={i}
+ className={cn(
+ 'p-2 rounded-lg flex flex-col items-start justify-start text-start space-y-1 duration-200 cursor-pointer transition focus:outline-none',
+ optimizationMode === mode.key
+ ? 'bg-light-secondary dark:bg-dark-secondary'
+ : 'hover:bg-light-secondary dark:hover:bg-dark-secondary',
+ )}
+ >
+
+
+ {mode.icon}
+
{mode.title}
+
+ {mode.key === 'quality' && (
+
+ Beta
+
+ )}
+
+
+ {mode.description}
+
+
+ ))}
+
+
+ )}
+
>
)}
diff --git a/src/components/MessageInputActions/Sources.tsx b/src/components/MessageInputActions/Sources.tsx
new file mode 100644
index 0000000..2652d58
--- /dev/null
+++ b/src/components/MessageInputActions/Sources.tsx
@@ -0,0 +1,93 @@
+import { useChat } from '@/lib/hooks/useChat';
+import {
+ Popover,
+ PopoverButton,
+ PopoverPanel,
+ Switch,
+} from '@headlessui/react';
+import {
+ GlobeIcon,
+ GraduationCapIcon,
+ NetworkIcon,
+} from '@phosphor-icons/react';
+import { AnimatePresence, motion } from 'motion/react';
+
+const sourcesList = [
+ {
+ name: 'Web',
+ key: 'web',
+ icon:
,
+ },
+ {
+ name: 'Academic',
+ key: 'academic',
+ icon:
,
+ },
+ {
+ name: 'Social',
+ key: 'discussions',
+ icon:
,
+ },
+];
+
+const Sources = () => {
+ const { sources, setSources } = useChat();
+
+ return (
+
+ {({ open }) => (
+ <>
+
+
+
+
+ {open && (
+
+
+ {sourcesList.map((source, i) => (
+ {
+ if (!sources.includes(source.key)) {
+ setSources([...sources, source.key]);
+ } else {
+ setSources(sources.filter((s) => s !== source.key));
+ }
+ }}
+ >
+
+ {source.icon}
+
{source.name}
+
+
+
+
+
+ ))}
+
+
+ )}
+
+ >
+ )}
+
+ );
+};
+
+export default Sources;
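The `onChange` handler is a plain include/exclude toggle on the `sources` array; as a standalone helper (a sketch with a hypothetical name, not code from this diff):

```ts
const toggleSource = (sources: string[], key: string): string[] =>
  sources.includes(key)
    ? sources.filter((s) => s !== key) // switch the source off
    : [...sources, key]; // switch it on

toggleSource(['web'], 'academic'); // => ['web', 'academic']
toggleSource(['web', 'academic'], 'web'); // => ['academic']
```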
diff --git a/src/components/Citation.tsx b/src/components/MessageRenderer/Citation.tsx
similarity index 100%
rename from src/components/Citation.tsx
rename to src/components/MessageRenderer/Citation.tsx
diff --git a/src/components/MessageRenderer/CodeBlock/CodeBlockDarkTheme.ts b/src/components/MessageRenderer/CodeBlock/CodeBlockDarkTheme.ts
new file mode 100644
index 0000000..0a9d6a4
--- /dev/null
+++ b/src/components/MessageRenderer/CodeBlock/CodeBlockDarkTheme.ts
@@ -0,0 +1,102 @@
+import type { CSSProperties } from 'react';
+
+const darkTheme = {
+ 'hljs-comment': {
+ color: '#8b949e',
+ },
+ 'hljs-quote': {
+ color: '#8b949e',
+ },
+ 'hljs-variable': {
+ color: '#ff7b72',
+ },
+ 'hljs-template-variable': {
+ color: '#ff7b72',
+ },
+ 'hljs-tag': {
+ color: '#ff7b72',
+ },
+ 'hljs-name': {
+ color: '#ff7b72',
+ },
+ 'hljs-selector-id': {
+ color: '#ff7b72',
+ },
+ 'hljs-selector-class': {
+ color: '#ff7b72',
+ },
+ 'hljs-regexp': {
+ color: '#ff7b72',
+ },
+ 'hljs-deletion': {
+ color: '#ff7b72',
+ },
+ 'hljs-number': {
+ color: '#f2cc60',
+ },
+ 'hljs-built_in': {
+ color: '#f2cc60',
+ },
+ 'hljs-builtin-name': {
+ color: '#f2cc60',
+ },
+ 'hljs-literal': {
+ color: '#f2cc60',
+ },
+ 'hljs-type': {
+ color: '#f2cc60',
+ },
+ 'hljs-params': {
+ color: '#f2cc60',
+ },
+ 'hljs-meta': {
+ color: '#f2cc60',
+ },
+ 'hljs-link': {
+ color: '#f2cc60',
+ },
+ 'hljs-attribute': {
+ color: '#58a6ff',
+ },
+ 'hljs-string': {
+ color: '#7ee787',
+ },
+ 'hljs-symbol': {
+ color: '#7ee787',
+ },
+ 'hljs-bullet': {
+ color: '#7ee787',
+ },
+ 'hljs-addition': {
+ color: '#7ee787',
+ },
+ 'hljs-title': {
+ color: '#79c0ff',
+ },
+ 'hljs-section': {
+ color: '#79c0ff',
+ },
+ 'hljs-keyword': {
+ color: '#c297ff',
+ },
+ 'hljs-selector-tag': {
+ color: '#c297ff',
+ },
+ hljs: {
+ display: 'block',
+ overflowX: 'auto',
+ background: '#0d1117',
+ color: '#c9d1d9',
+ padding: '0.75em',
+ border: '1px solid #21262d',
+ borderRadius: '10px',
+ },
+ 'hljs-emphasis': {
+ fontStyle: 'italic',
+ },
+ 'hljs-strong': {
+ fontWeight: 'bold',
+ },
+} satisfies Record<string, CSSProperties>;
+
+export default darkTheme;
diff --git a/src/components/MessageRenderer/CodeBlock/CodeBlockLightTheme.ts b/src/components/MessageRenderer/CodeBlock/CodeBlockLightTheme.ts
new file mode 100644
index 0000000..758dbac
--- /dev/null
+++ b/src/components/MessageRenderer/CodeBlock/CodeBlockLightTheme.ts
@@ -0,0 +1,102 @@
+import type { CSSProperties } from 'react';
+
+const lightTheme = {
+ 'hljs-comment': {
+ color: '#6e7781',
+ },
+ 'hljs-quote': {
+ color: '#6e7781',
+ },
+ 'hljs-variable': {
+ color: '#d73a49',
+ },
+ 'hljs-template-variable': {
+ color: '#d73a49',
+ },
+ 'hljs-tag': {
+ color: '#d73a49',
+ },
+ 'hljs-name': {
+ color: '#d73a49',
+ },
+ 'hljs-selector-id': {
+ color: '#d73a49',
+ },
+ 'hljs-selector-class': {
+ color: '#d73a49',
+ },
+ 'hljs-regexp': {
+ color: '#d73a49',
+ },
+ 'hljs-deletion': {
+ color: '#d73a49',
+ },
+ 'hljs-number': {
+ color: '#b08800',
+ },
+ 'hljs-built_in': {
+ color: '#b08800',
+ },
+ 'hljs-builtin-name': {
+ color: '#b08800',
+ },
+ 'hljs-literal': {
+ color: '#b08800',
+ },
+ 'hljs-type': {
+ color: '#b08800',
+ },
+ 'hljs-params': {
+ color: '#b08800',
+ },
+ 'hljs-meta': {
+ color: '#b08800',
+ },
+ 'hljs-link': {
+ color: '#b08800',
+ },
+ 'hljs-attribute': {
+ color: '#0a64ae',
+ },
+ 'hljs-string': {
+ color: '#22863a',
+ },
+ 'hljs-symbol': {
+ color: '#22863a',
+ },
+ 'hljs-bullet': {
+ color: '#22863a',
+ },
+ 'hljs-addition': {
+ color: '#22863a',
+ },
+ 'hljs-title': {
+ color: '#005cc5',
+ },
+ 'hljs-section': {
+ color: '#005cc5',
+ },
+ 'hljs-keyword': {
+ color: '#6f42c1',
+ },
+ 'hljs-selector-tag': {
+ color: '#6f42c1',
+ },
+ hljs: {
+ display: 'block',
+ overflowX: 'auto',
+ background: '#ffffff',
+ color: '#24292f',
+ padding: '0.75em',
+ border: '1px solid #e8edf1',
+ borderRadius: '10px',
+ },
+ 'hljs-emphasis': {
+ fontStyle: 'italic',
+ },
+ 'hljs-strong': {
+ fontWeight: 'bold',
+ },
+} satisfies Record<string, CSSProperties>;
+
+export default lightTheme;
diff --git a/src/components/MessageRenderer/CodeBlock/index.tsx b/src/components/MessageRenderer/CodeBlock/index.tsx
new file mode 100644
index 0000000..493a0d0
--- /dev/null
+++ b/src/components/MessageRenderer/CodeBlock/index.tsx
@@ -0,0 +1,64 @@
+'use client';
+
+import { CheckIcon, CopyIcon } from '@phosphor-icons/react';
+import React, { useEffect, useMemo, useState } from 'react';
+import { useTheme } from 'next-themes';
+import SyntaxHighlighter from 'react-syntax-highlighter';
+import darkTheme from './CodeBlockDarkTheme';
+import lightTheme from './CodeBlockLightTheme';
+
+const CodeBlock = ({
+ language,
+ children,
+}: {
+ language: string;
+ children: React.ReactNode;
+}) => {
+ const { resolvedTheme } = useTheme();
+ const [mounted, setMounted] = useState(false);
+
+ const [copied, setCopied] = useState(false);
+
+ useEffect(() => {
+ setMounted(true);
+ }, []);
+
+ const syntaxTheme = useMemo(() => {
+ if (!mounted) return lightTheme;
+ return resolvedTheme === 'dark' ? darkTheme : lightTheme;
+ }, [mounted, resolvedTheme]);
+
+ return (
+    <div className="relative">
+      <button
+        onClick={() => {
+          navigator.clipboard.writeText(children as string);
+          setCopied(true);
+          setTimeout(() => setCopied(false), 2000);
+        }}
+      >
+        {copied ? (
+          <CheckIcon size={16} />
+        ) : (
+          <CopyIcon size={16} />
+        )}
+      </button>
+      <SyntaxHighlighter language={language} style={syntaxTheme}>
+        {children as string}
+      </SyntaxHighlighter>
+    </div>
+ );
+};
+
+export default CodeBlock;
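In the app this component is only reached through MessageBox's `renderRule` override, which passes the fence's info string as `language` (`node.lang`) and its body as `children` (`node.text`). Standalone, the equivalent usage would be (illustrative):

```tsx
// Illustrative only; normally constructed by renderRule in MessageBox.tsx.
<CodeBlock language="python">{'print("hello")'}</CodeBlock>
```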
diff --git a/src/components/MessageSources.tsx b/src/components/MessageSources.tsx
index fb2b5bb..a1db27a 100644
--- a/src/components/MessageSources.tsx
+++ b/src/components/MessageSources.tsx
@@ -6,11 +6,11 @@ import {
Transition,
TransitionChild,
} from '@headlessui/react';
-import { Document } from '@langchain/core/documents';
import { File } from 'lucide-react';
import { Fragment, useState } from 'react';
+import { Chunk } from '@/lib/types';
-const MessageSources = ({ sources }: { sources: Document[] }) => {
+const MessageSources = ({ sources }: { sources: Chunk[] }) => {
const [isDialogOpen, setIsDialogOpen] = useState(false);
const closeModal = () => {
@@ -37,7 +37,7 @@ const MessageSources = ({ sources }: { sources: Document[] }) => {
- {source.metadata.url === 'File' ? (
+ {source.metadata.url.includes('file_id://') ? (
@@ -51,7 +51,9 @@ const MessageSources = ({ sources }: { sources: Document[] }) => {
/>
)}
- {source.metadata.url.replace(/.+\/\/|www.|\..+/g, '')}
+ {source.metadata.url.includes('file_id://')
+ ? 'Uploaded File'
+ : source.metadata.url.replace(/.+\/\/|www.|\..+/g, '')}
diff --git a/src/components/Navbar.tsx b/src/components/Navbar.tsx
index bbcd470..6d3e77c 100644
--- a/src/components/Navbar.tsx
+++ b/src/components/Navbar.tsx
@@ -11,6 +11,7 @@ import {
} from '@headlessui/react';
import jsPDF from 'jspdf';
import { useChat, Section } from '@/lib/hooks/useChat';
+import { SourceBlock } from '@/lib/types';
const downloadFile = (filename: string, content: string, type: string) => {
const blob = new Blob([content], { type });
@@ -28,35 +29,41 @@ const downloadFile = (filename: string, content: string, type: string) => {
const exportAsMarkdown = (sections: Section[], title: string) => {
const date = new Date(
- sections[0]?.userMessage?.createdAt || Date.now(),
+ sections[0].message.createdAt || Date.now(),
).toLocaleString();
let md = `# 💬 Chat Export: ${title}\n\n`;
md += `*Exported on: ${date}*\n\n---\n`;
sections.forEach((section, idx) => {
- if (section.userMessage) {
- md += `\n---\n`;
- md += `**🧑 User**
+ md += `\n---\n`;
+ md += `**🧑 User**
`;
- md += `*${new Date(section.userMessage.createdAt).toLocaleString()}*\n\n`;
- md += `> ${section.userMessage.content.replace(/\n/g, '\n> ')}\n`;
- }
+ md += `*${new Date(section.message.createdAt).toLocaleString()}*\n\n`;
+ md += `> ${section.message.query.replace(/\n/g, '\n> ')}\n`;
- if (section.assistantMessage) {
+ if (section.message.responseBlocks.length > 0) {
md += `\n---\n`;
md += `**🤖 Assistant**
`;
- md += `*${new Date(section.assistantMessage.createdAt).toLocaleString()}*\n\n`;
- md += `> ${section.assistantMessage.content.replace(/\n/g, '\n> ')}\n`;
+ md += `*${new Date(section.message.createdAt).toLocaleString()}*\n\n`;
+ md += `> ${section.message.responseBlocks
+ .filter((b) => b.type === 'text')
+ .map((block) => block.data)
+ .join('\n')
+ .replace(/\n/g, '\n> ')}\n`;
}
+ const sourceResponseBlock = section.message.responseBlocks.find(
+ (block) => block.type === 'source',
+ ) as SourceBlock | undefined;
+
if (
- section.sourceMessage &&
- section.sourceMessage.sources &&
- section.sourceMessage.sources.length > 0
+ sourceResponseBlock &&
+ sourceResponseBlock.data &&
+ sourceResponseBlock.data.length > 0
) {
md += `\n**Citations:**\n`;
- section.sourceMessage.sources.forEach((src: any, i: number) => {
+ sourceResponseBlock.data.forEach((src: any, i: number) => {
const url = src.metadata?.url || '';
md += `- [${i + 1}] [${url}](${url})\n`;
});
@@ -69,7 +76,7 @@ const exportAsMarkdown = (sections: Section[], title: string) => {
const exportAsPDF = (sections: Section[], title: string) => {
const doc = new jsPDF();
const date = new Date(
- sections[0]?.userMessage?.createdAt || Date.now(),
+ sections[0]?.message?.createdAt || Date.now(),
).toLocaleString();
let y = 15;
const pageHeight = doc.internal.pageSize.height;
@@ -86,44 +93,38 @@ const exportAsPDF = (sections: Section[], title: string) => {
doc.setTextColor(30);
sections.forEach((section, idx) => {
- if (section.userMessage) {
- if (y > pageHeight - 30) {
- doc.addPage();
- y = 15;
- }
- doc.setFont('helvetica', 'bold');
- doc.text('User', 10, y);
- doc.setFont('helvetica', 'normal');
- doc.setFontSize(10);
- doc.setTextColor(120);
- doc.text(
- `${new Date(section.userMessage.createdAt).toLocaleString()}`,
- 40,
- y,
- );
- y += 6;
- doc.setTextColor(30);
- doc.setFontSize(12);
- const userLines = doc.splitTextToSize(section.userMessage.content, 180);
- for (let i = 0; i < userLines.length; i++) {
- if (y > pageHeight - 20) {
- doc.addPage();
- y = 15;
- }
- doc.text(userLines[i], 12, y);
- y += 6;
- }
- y += 6;
- doc.setDrawColor(230);
- if (y > pageHeight - 10) {
- doc.addPage();
- y = 15;
- }
- doc.line(10, y, 200, y);
- y += 4;
+ if (y > pageHeight - 30) {
+ doc.addPage();
+ y = 15;
}
+ doc.setFont('helvetica', 'bold');
+ doc.text('User', 10, y);
+ doc.setFont('helvetica', 'normal');
+ doc.setFontSize(10);
+ doc.setTextColor(120);
+ doc.text(`${new Date(section.message.createdAt).toLocaleString()}`, 40, y);
+ y += 6;
+ doc.setTextColor(30);
+ doc.setFontSize(12);
+ const userLines = doc.splitTextToSize(section.message.query, 180);
+ for (let i = 0; i < userLines.length; i++) {
+ if (y > pageHeight - 20) {
+ doc.addPage();
+ y = 15;
+ }
+ doc.text(userLines[i], 12, y);
+ y += 6;
+ }
+ y += 6;
+ doc.setDrawColor(230);
+ if (y > pageHeight - 10) {
+ doc.addPage();
+ y = 15;
+ }
+ doc.line(10, y, 200, y);
+ y += 4;
- if (section.assistantMessage) {
+ if (section.message.responseBlocks.length > 0) {
if (y > pageHeight - 30) {
doc.addPage();
y = 15;
@@ -134,7 +135,7 @@ const exportAsPDF = (sections: Section[], title: string) => {
doc.setFontSize(10);
doc.setTextColor(120);
doc.text(
- `${new Date(section.assistantMessage.createdAt).toLocaleString()}`,
+ `${new Date(section.message.createdAt).toLocaleString()}`,
40,
y,
);
@@ -142,7 +143,7 @@ const exportAsPDF = (sections: Section[], title: string) => {
doc.setTextColor(30);
doc.setFontSize(12);
const assistantLines = doc.splitTextToSize(
- section.assistantMessage.content,
+ section.parsedTextBlocks.join('\n'),
180,
);
for (let i = 0; i < assistantLines.length; i++) {
@@ -154,10 +155,14 @@ const exportAsPDF = (sections: Section[], title: string) => {
y += 6;
}
+ const sourceResponseBlock = section.message.responseBlocks.find(
+ (block) => block.type === 'source',
+ ) as SourceBlock | undefined;
+
if (
- section.sourceMessage &&
- section.sourceMessage.sources &&
- section.sourceMessage.sources.length > 0
+ sourceResponseBlock &&
+ sourceResponseBlock.data &&
+ sourceResponseBlock.data.length > 0
) {
doc.setFontSize(11);
doc.setTextColor(80);
@@ -167,7 +172,7 @@ const exportAsPDF = (sections: Section[], title: string) => {
}
doc.text('Citations:', 12, y);
y += 5;
- section.sourceMessage.sources.forEach((src: any, i: number) => {
+ sourceResponseBlock.data.forEach((src: any, i: number) => {
const url = src.metadata?.url || '';
if (y > pageHeight - 15) {
doc.addPage();
@@ -198,15 +203,16 @@ const Navbar = () => {
const { sections, chatId } = useChat();
useEffect(() => {
- if (sections.length > 0 && sections[0].userMessage) {
+ if (sections.length > 0 && sections[0].message) {
const newTitle =
- sections[0].userMessage.content.length > 20
- ? `${sections[0].userMessage.content.substring(0, 20).trim()}...`
- : sections[0].userMessage.content;
+ sections[0].message.query.length > 30
+ ? `${sections[0].message.query.substring(0, 30).trim()}...`
+ : sections[0].message.query || 'New Conversation';
+
setTitle(newTitle);
const newTimeAgo = formatTimeDifference(
new Date(),
- sections[0].userMessage.createdAt,
+ sections[0].message.createdAt,
);
setTimeAgo(newTimeAgo);
}
@@ -214,10 +220,10 @@ const Navbar = () => {
useEffect(() => {
const intervalId = setInterval(() => {
- if (sections.length > 0 && sections[0].userMessage) {
+ if (sections.length > 0 && sections[0].message) {
const newTimeAgo = formatTimeDifference(
new Date(),
- sections[0].userMessage.createdAt,
+ sections[0].message.createdAt,
);
setTimeAgo(newTimeAgo);
}
diff --git a/src/components/Settings/SettingsDialogue.tsx b/src/components/Settings/SettingsDialogue.tsx
index ba097a9..f42ce9c 100644
--- a/src/components/Settings/SettingsDialogue.tsx
+++ b/src/components/Settings/SettingsDialogue.tsx
@@ -3,6 +3,7 @@ import {
ArrowLeft,
BrainCog,
ChevronLeft,
+ ExternalLink,
Search,
Sliders,
ToggleRight,
@@ -115,35 +116,52 @@ const SettingsDialogue = ({
) : (
-
-
setIsOpen(false)}
- className="group flex flex-row items-center hover:bg-light-200 hover:dark:bg-dark-200 p-2 rounded-lg"
- >
-
-
- Back
+
+
+
setIsOpen(false)}
+ className="group flex flex-row items-center hover:bg-light-200 hover:dark:bg-dark-200 p-2 rounded-lg"
+ >
+
+
+ Back
+
+
+
+
+ {sections.map((section) => (
+
setActiveSection(section.key)}
+ >
+
+ {section.name}
+
+ ))}
+
+
+
+
+ Version: {process.env.NEXT_PUBLIC_VERSION}
-
-
- {sections.map((section) => (
-
setActiveSection(section.key)}
- >
-
- {section.name}
-
- ))}
+
+ GitHub
+
+
diff --git a/src/components/Settings/SettingsField.tsx b/src/components/Settings/SettingsField.tsx
index 55aa640..447ce1c 100644
--- a/src/components/Settings/SettingsField.tsx
+++ b/src/components/Settings/SettingsField.tsx
@@ -12,6 +12,12 @@ import { useTheme } from 'next-themes';
import { Loader2 } from 'lucide-react';
import { Switch } from '@headlessui/react';
+const emitClientConfigChanged = () => {
+ if (typeof window !== 'undefined') {
+ window.dispatchEvent(new Event('client-config-changed'));
+ }
+};
+
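+// Client-scoped fields below dispatch this after writing localStorage;
+// listeners such as EmptyChat.tsx (above) re-read the client registry,
+// so widget toggles and similar settings apply without a page reload.
+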
const SettingsSelect = ({
field,
value,
@@ -35,6 +41,7 @@ const SettingsSelect = ({
if (field.key === 'theme') {
setTheme(newValue);
}
+ emitClientConfigChanged();
} else {
const res = await fetch('/api/config', {
method: 'POST',
@@ -106,6 +113,7 @@ const SettingsInput = ({
try {
if (field.scope === 'client') {
localStorage.setItem(field.key, newValue);
+ emitClientConfigChanged();
} else {
const res = await fetch('/api/config', {
method: 'POST',
@@ -182,6 +190,7 @@ const SettingsTextarea = ({
try {
if (field.scope === 'client') {
localStorage.setItem(field.key, newValue);
+ emitClientConfigChanged();
} else {
const res = await fetch('/api/config', {
method: 'POST',
@@ -258,6 +267,7 @@ const SettingsSwitch = ({
try {
if (field.scope === 'client') {
localStorage.setItem(field.key, String(newValue));
+ emitClientConfigChanged();
} else {
const res = await fetch('/api/config', {
method: 'POST',
@@ -300,7 +310,7 @@ const SettingsSwitch = ({
checked={isChecked}
onChange={handleSave}
disabled={loading}
- className="group relative flex h-6 w-12 shrink-0 cursor-pointer rounded-full bg-white/10 p-1 duration-200 ease-in-out focus:outline-none transition-colors disabled:opacity-60 disabled:cursor-not-allowed data-[checked]:bg-sky-500"
+ className="group relative flex h-6 w-12 shrink-0 cursor-pointer rounded-full bg-light-200 dark:bg-white/10 p-1 duration-200 ease-in-out focus:outline-none transition-colors disabled:opacity-60 disabled:cursor-not-allowed data-[checked]:bg-sky-500 dark:data-[checked]:bg-sky-500"
>
diff --git a/src/components/Widgets/Calculation.tsx b/src/components/Widgets/Calculation.tsx
new file mode 100644
--- /dev/null
+++ b/src/components/Widgets/Calculation.tsx
+const Calculation = ({
+  expression,
+  result,
+}: {
+  expression: string;
+  result: number;
+}) => {
+ return (
+
+
+
+
+
+
+ Expression
+
+
+
+
+ {expression}
+
+
+
+
+
+
+
+
+ Result
+
+
+
+
+ {result.toLocaleString()}
+
+
+
+
+
+ );
+};
+
+export default Calculation;
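Given the `Widget` interface added to ChatWindow.tsx, a calculation widget would arrive with its props packed into `params` (payload illustrative); the Renderer in the next file spreads them into the component:

```ts
import { Widget } from '@/components/ChatWindow';

const widget: Widget = {
  widgetType: 'calculation_result',
  params: { expression: '12 * 7', result: 84 },
};
```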
diff --git a/src/components/Widgets/Renderer.tsx b/src/components/Widgets/Renderer.tsx
new file mode 100644
index 0000000..8456c8f
--- /dev/null
+++ b/src/components/Widgets/Renderer.tsx
@@ -0,0 +1,76 @@
+import React from 'react';
+import { Widget } from '../ChatWindow';
+import Weather from './Weather';
+import Calculation from './Calculation';
+import Stock from './Stock';
+
+const Renderer = ({ widgets }: { widgets: Widget[] }) => {
+ return widgets.map((widget, index) => {
+ switch (widget.widgetType) {
+ case 'weather':
+ return (
+          <Weather key={index} {...(widget.params as any)} />
+ );
+ case 'calculation_result':
+ return (
+          <Calculation key={index} {...(widget.params as any)} />
+ );
+ case 'stock':
+ return (
+          <Stock key={index} {...(widget.params as any)} />
+ );
+ default:
+        return <div key={index}>Unknown widget type: {widget.widgetType}</div>;
+ }
+ });
+};
+
+export default Renderer;
diff --git a/src/components/Widgets/Stock.tsx b/src/components/Widgets/Stock.tsx
new file mode 100644
index 0000000..57fba1a
--- /dev/null
+++ b/src/components/Widgets/Stock.tsx
@@ -0,0 +1,517 @@
+'use client';
+
+import { Clock, ArrowUpRight, ArrowDownRight, Minus } from 'lucide-react';
+import { useEffect, useRef, useState } from 'react';
+import {
+ createChart,
+ ColorType,
+ LineStyle,
+ BaselineSeries,
+ LineSeries,
+} from 'lightweight-charts';
+
+type StockWidgetProps = {
+ symbol: string;
+ shortName: string;
+ longName?: string;
+ exchange?: string;
+ currency?: string;
+ marketState?: string;
+ regularMarketPrice?: number;
+ regularMarketChange?: number;
+ regularMarketChangePercent?: number;
+ regularMarketPreviousClose?: number;
+ regularMarketOpen?: number;
+ regularMarketDayHigh?: number;
+ regularMarketDayLow?: number;
+ regularMarketVolume?: number;
+ averageDailyVolume3Month?: number;
+ marketCap?: number;
+ fiftyTwoWeekLow?: number;
+ fiftyTwoWeekHigh?: number;
+ trailingPE?: number;
+ forwardPE?: number;
+ dividendYield?: number;
+ earningsPerShare?: number;
+ website?: string;
+ postMarketPrice?: number;
+ postMarketChange?: number;
+ postMarketChangePercent?: number;
+ preMarketPrice?: number;
+ preMarketChange?: number;
+ preMarketChangePercent?: number;
+ chartData?: {
+ '1D'?: { timestamps: number[]; prices: number[] } | null;
+ '5D'?: { timestamps: number[]; prices: number[] } | null;
+ '1M'?: { timestamps: number[]; prices: number[] } | null;
+ '3M'?: { timestamps: number[]; prices: number[] } | null;
+ '6M'?: { timestamps: number[]; prices: number[] } | null;
+ '1Y'?: { timestamps: number[]; prices: number[] } | null;
+ MAX?: { timestamps: number[]; prices: number[] } | null;
+ } | null;
+ comparisonData?: Array<{
+ ticker: string;
+ name: string;
+ chartData: {
+ '1D'?: { timestamps: number[]; prices: number[] } | null;
+ '5D'?: { timestamps: number[]; prices: number[] } | null;
+ '1M'?: { timestamps: number[]; prices: number[] } | null;
+ '3M'?: { timestamps: number[]; prices: number[] } | null;
+ '6M'?: { timestamps: number[]; prices: number[] } | null;
+ '1Y'?: { timestamps: number[]; prices: number[] } | null;
+ MAX?: { timestamps: number[]; prices: number[] } | null;
+ };
+ }> | null;
+ error?: string;
+};
+
+const formatNumber = (num: number | undefined, decimals = 2): string => {
+ if (num === undefined || num === null) return 'N/A';
+ return num.toLocaleString(undefined, {
+ minimumFractionDigits: decimals,
+ maximumFractionDigits: decimals,
+ });
+};
+
+const formatLargeNumber = (num: number | undefined): string => {
+ if (num === undefined || num === null) return 'N/A';
+ if (num >= 1e12) return `$${(num / 1e12).toFixed(2)}T`;
+ if (num >= 1e9) return `$${(num / 1e9).toFixed(2)}B`;
+ if (num >= 1e6) return `$${(num / 1e6).toFixed(2)}M`;
+ if (num >= 1e3) return `$${(num / 1e3).toFixed(2)}K`;
+ return `$${num.toFixed(2)}`;
+};
+
+const Stock = (props: StockWidgetProps) => {
+ const [isDarkMode, setIsDarkMode] = useState(false);
+ const [selectedTimeframe, setSelectedTimeframe] = useState<
+ '1D' | '5D' | '1M' | '3M' | '6M' | '1Y' | 'MAX'
+ >('1M');
+  const chartContainerRef = useRef<HTMLDivElement | null>(null);
+
+ useEffect(() => {
+ const checkDarkMode = () => {
+ setIsDarkMode(document.documentElement.classList.contains('dark'));
+ };
+
+ checkDarkMode();
+
+ const observer = new MutationObserver(checkDarkMode);
+ observer.observe(document.documentElement, {
+ attributes: true,
+ attributeFilter: ['class'],
+ });
+
+ return () => observer.disconnect();
+ }, []);
+
+ useEffect(() => {
+ const currentChartData = props.chartData?.[selectedTimeframe];
+ if (
+ !chartContainerRef.current ||
+ !currentChartData ||
+ currentChartData.timestamps.length === 0
+ ) {
+ return;
+ }
+
+ const chart = createChart(chartContainerRef.current, {
+ width: chartContainerRef.current.clientWidth,
+ height: 280,
+ layout: {
+ background: { type: ColorType.Solid, color: 'transparent' },
+ textColor: isDarkMode ? '#6b7280' : '#9ca3af',
+ fontSize: 11,
+ attributionLogo: false,
+ },
+ grid: {
+ vertLines: {
+ color: isDarkMode ? '#21262d' : '#e8edf1',
+ style: LineStyle.Solid,
+ },
+ horzLines: {
+ color: isDarkMode ? '#21262d' : '#e8edf1',
+ style: LineStyle.Solid,
+ },
+ },
+ crosshair: {
+ vertLine: {
+ color: isDarkMode ? '#30363d' : '#d0d7de',
+ labelVisible: false,
+ },
+ horzLine: {
+ color: isDarkMode ? '#30363d' : '#d0d7de',
+ labelVisible: true,
+ },
+ },
+ rightPriceScale: {
+ borderVisible: false,
+ visible: false,
+ },
+ leftPriceScale: {
+ borderVisible: false,
+ visible: true,
+ },
+ timeScale: {
+ borderVisible: false,
+ timeVisible: false,
+ },
+ handleScroll: false,
+ handleScale: false,
+ });
+
+ const prices = currentChartData.prices;
+ let baselinePrice: number;
+
+ if (selectedTimeframe === '1D') {
+ baselinePrice = props.regularMarketPreviousClose ?? prices[0];
+ } else {
+ baselinePrice = prices[0];
+ }
+
+ const baselineSeries = chart.addSeries(BaselineSeries);
+
+ baselineSeries.applyOptions({
+ baseValue: { type: 'price', price: baselinePrice },
+ topLineColor: isDarkMode ? '#14b8a6' : '#0d9488',
+ topFillColor1: isDarkMode
+ ? 'rgba(20, 184, 166, 0.28)'
+ : 'rgba(13, 148, 136, 0.24)',
+ topFillColor2: isDarkMode
+ ? 'rgba(20, 184, 166, 0.05)'
+ : 'rgba(13, 148, 136, 0.05)',
+ bottomLineColor: isDarkMode ? '#f87171' : '#dc2626',
+ bottomFillColor1: isDarkMode
+ ? 'rgba(248, 113, 113, 0.05)'
+ : 'rgba(220, 38, 38, 0.05)',
+ bottomFillColor2: isDarkMode
+ ? 'rgba(248, 113, 113, 0.28)'
+ : 'rgba(220, 38, 38, 0.24)',
+ lineWidth: 2,
+ crosshairMarkerVisible: true,
+ crosshairMarkerRadius: 4,
+ crosshairMarkerBorderColor: '',
+ crosshairMarkerBackgroundColor: '',
+ });
+
+ const data = currentChartData.timestamps.map((timestamp, index) => {
+ const price = currentChartData.prices[index];
+ return {
+ time: (timestamp / 1000) as any,
+ value: price,
+ };
+ });
+
+ baselineSeries.setData(data);
+
+ const comparisonColors = ['#8b5cf6', '#f59e0b', '#ec4899'];
+ if (props.comparisonData && props.comparisonData.length > 0) {
+ props.comparisonData.forEach((comp, index) => {
+ const compChartData = comp.chartData[selectedTimeframe];
+ if (compChartData && compChartData.prices.length > 0) {
+ const compData = compChartData.timestamps.map((timestamp, i) => ({
+ time: (timestamp / 1000) as any,
+ value: compChartData.prices[i],
+ }));
+
+ const compSeries = chart.addSeries(LineSeries);
+ compSeries.applyOptions({
+ color: comparisonColors[index] || '#6b7280',
+ lineWidth: 2,
+ crosshairMarkerVisible: true,
+ crosshairMarkerRadius: 4,
+ priceScaleId: 'left',
+ });
+ compSeries.setData(compData);
+ }
+ });
+ }
+
+ chart.timeScale().fitContent();
+
+ const handleResize = () => {
+ if (chartContainerRef.current) {
+ chart.applyOptions({
+ width: chartContainerRef.current.clientWidth,
+ });
+ }
+ };
+
+ window.addEventListener('resize', handleResize);
+
+ return () => {
+ window.removeEventListener('resize', handleResize);
+ chart.remove();
+ };
+ }, [
+ props.chartData,
+ props.comparisonData,
+ selectedTimeframe,
+ isDarkMode,
+ props.regularMarketPreviousClose,
+ ]);
+
+ const isPositive = (props.regularMarketChange ?? 0) >= 0;
+ const isMarketOpen = props.marketState === 'REGULAR';
+ const isPreMarket = props.marketState === 'PRE';
+ const isPostMarket = props.marketState === 'POST';
+
+ const displayPrice = isPostMarket
+ ? props.postMarketPrice ?? props.regularMarketPrice
+ : isPreMarket
+ ? props.preMarketPrice ?? props.regularMarketPrice
+ : props.regularMarketPrice;
+
+ const displayChange = isPostMarket
+ ? props.postMarketChange ?? props.regularMarketChange
+ : isPreMarket
+ ? props.preMarketChange ?? props.regularMarketChange
+ : props.regularMarketChange;
+
+ const displayChangePercent = isPostMarket
+ ? props.postMarketChangePercent ?? props.regularMarketChangePercent
+ : isPreMarket
+ ? props.preMarketChangePercent ?? props.regularMarketChangePercent
+ : props.regularMarketChangePercent;
+
+ const changeColor = isPositive
+ ? 'text-green-600 dark:text-green-400'
+ : 'text-red-600 dark:text-red-400';
+
+ if (props.error) {
+ return (
+
+
+ Error: {props.error}
+
+
+ );
+ }
+
+ return (
+
+
+
+
+
+ {props.website && (
+
{
+ (e.target as HTMLImageElement).style.display = 'none';
+ }}
+ />
+ )}
+
+ {props.symbol}
+
+ {props.exchange && (
+
+ {props.exchange}
+
+ )}
+ {isMarketOpen && (
+
+ )}
+ {isPreMarket && (
+
+
+
+ Pre-Market
+
+
+ )}
+ {isPostMarket && (
+
+
+
+ After Hours
+
+
+ )}
+
+
+ {props.longName || props.shortName}
+
+
+
+
+
+
+ {props.currency === 'USD' ? '$' : ''}
+ {formatNumber(displayPrice)}
+
+
+
+ {isPositive ? (
+
+ ) : displayChange === 0 ? (
+
+ ) : (
+
+ )}
+
+ {displayChange !== undefined && displayChange >= 0 ? '+' : ''}
+ {formatNumber(displayChange)}
+
+
+ (
+ {displayChangePercent !== undefined && displayChangePercent >= 0
+ ? '+'
+ : ''}
+ {formatNumber(displayChangePercent)}%)
+
+
+
+
+
+ {props.chartData && (
+
+
+
+ {(['1D', '5D', '1M', '3M', '6M', '1Y', 'MAX'] as const).map(
+ (timeframe) => (
+                      <button
+                        key={timeframe}
+                        onClick={() => setSelectedTimeframe(timeframe)}
+ disabled={!props.chartData?.[timeframe]}
+ className={`px-3 py-1.5 text-xs font-medium rounded transition-colors ${
+ selectedTimeframe === timeframe
+ ? 'bg-black/10 dark:bg-white/10 text-black dark:text-white'
+ : 'text-black/50 dark:text-white/50 hover:text-black/80 dark:hover:text-white/80'
+ } disabled:opacity-30 disabled:cursor-not-allowed`}
+ >
+ {timeframe}
+
+ ),
+ )}
+
+
+ {props.comparisonData && props.comparisonData.length > 0 && (
+
+
+ {props.symbol}
+
+ {props.comparisonData.map((comp, index) => {
+ const colors = ['#8b5cf6', '#f59e0b', '#ec4899'];
+ return (
+
+ );
+ })}
+
+ )}
+
+
+
+
+
+
+
+ Prev Close
+
+
+ ${formatNumber(props.regularMarketPreviousClose)}
+
+
+
+
+ 52W Range
+
+
+ ${formatNumber(props.fiftyTwoWeekLow, 2)}-$
+ {formatNumber(props.fiftyTwoWeekHigh, 2)}
+
+
+
+
+ Market Cap
+
+
+ {formatLargeNumber(props.marketCap)}
+
+
+
+
+ Open
+
+
+ ${formatNumber(props.regularMarketOpen)}
+
+
+
+
+ P/E Ratio
+
+
+ {props.trailingPE ? formatNumber(props.trailingPE, 2) : 'N/A'}
+
+
+
+
+ Dividend Yield
+
+
+ {props.dividendYield
+ ? `${formatNumber(props.dividendYield * 100, 2)}%`
+ : 'N/A'}
+
+
+
+
+ Day Range
+
+
+ ${formatNumber(props.regularMarketDayLow, 2)}-$
+ {formatNumber(props.regularMarketDayHigh, 2)}
+
+
+
+
+ Volume
+
+
+ {formatLargeNumber(props.regularMarketVolume)}
+
+
+
+
+ EPS
+
+
+ $
+ {props.earningsPerShare
+ ? formatNumber(props.earningsPerShare, 2)
+ : 'N/A'}
+
+
+
+
+ )}
+
+
+ );
+};
+
+export default Stock;
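The two formatters above drive every numeric cell in the stats grid; sample outputs, computed from their code (thousands separators are locale-dependent, en-US shown):

```ts
formatNumber(1234.5);      // "1,234.50"
formatLargeNumber(3.2e12); // "$3.20T"
formatLargeNumber(4.56e9); // "$4.56B"
formatLargeNumber(7.5e6);  // "$7.50M"
formatLargeNumber(999);    // "$999.00"
```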
diff --git a/src/components/Widgets/Weather.tsx b/src/components/Widgets/Weather.tsx
new file mode 100644
index 0000000..159c15e
--- /dev/null
+++ b/src/components/Widgets/Weather.tsx
@@ -0,0 +1,422 @@
+'use client';
+
+import { getMeasurementUnit } from '@/lib/config/clientRegistry';
+import { Wind, Droplets, Gauge } from 'lucide-react';
+import { useMemo, useEffect, useState } from 'react';
+
+type WeatherWidgetProps = {
+ location: string;
+ current: {
+ time: string;
+ temperature_2m: number;
+ relative_humidity_2m: number;
+ apparent_temperature: number;
+ is_day: number;
+ precipitation: number;
+ weather_code: number;
+ wind_speed_10m: number;
+ wind_direction_10m: number;
+ wind_gusts_10m?: number;
+ };
+ daily: {
+ time: string[];
+ weather_code: number[];
+ temperature_2m_max: number[];
+ temperature_2m_min: number[];
+ precipitation_probability_max: number[];
+ };
+ timezone: string;
+};
+
+const getWeatherInfo = (code: number, isDay: boolean, isDarkMode: boolean) => {
+ const dayNight = isDay ? 'day' : 'night';
+
+ const weatherMap: Record<
+ number,
+ { icon: string; description: string; gradient: string }
+ > = {
+ 0: {
+ icon: `clear-${dayNight}.svg`,
+ description: 'Clear',
+ gradient: isDarkMode
+ ? isDay
+ ? 'radial-gradient(ellipse 150% 100% at 50% 100%, #E8F1FA, #7A9DBF 35%, #4A7BA8 60%, #2F5A88)'
+ : 'radial-gradient(ellipse 150% 100% at 50% 100%, #5A6A7E, #3E4E63 40%, #2A3544 65%, #1A2230)'
+ : isDay
+ ? 'radial-gradient(ellipse 150% 100% at 50% 100%, #FFFFFF, #DBEAFE 30%, #93C5FD 60%, #60A5FA)'
+ : 'radial-gradient(ellipse 150% 100% at 50% 100%, #7B8694, #475569 45%, #334155 70%, #1E293B)',
+ },
+ 1: {
+ icon: `clear-${dayNight}.svg`,
+ description: 'Mostly Clear',
+ gradient: isDarkMode
+ ? isDay
+ ? 'radial-gradient(ellipse 150% 100% at 50% 100%, #E8F1FA, #7A9DBF 35%, #4A7BA8 60%, #2F5A88)'
+ : 'radial-gradient(ellipse 150% 100% at 50% 100%, #5A6A7E, #3E4E63 40%, #2A3544 65%, #1A2230)'
+ : isDay
+ ? 'radial-gradient(ellipse 150% 100% at 50% 100%, #FFFFFF, #DBEAFE 30%, #93C5FD 60%, #60A5FA)'
+ : 'radial-gradient(ellipse 150% 100% at 50% 100%, #7B8694, #475569 45%, #334155 70%, #1E293B)',
+ },
+ 2: {
+ icon: `cloudy-1-${dayNight}.svg`,
+ description: 'Partly Cloudy',
+ gradient: isDarkMode
+ ? isDay
+ ? 'radial-gradient(ellipse 150% 100% at 50% 100%, #D4E1ED, #8BA3B8 35%, #617A93 60%, #426070)'
+ : 'radial-gradient(ellipse 150% 100% at 50% 100%, #6B7583, #4A5563 40%, #3A4450 65%, #2A3340)'
+ : isDay
+ ? 'radial-gradient(ellipse 150% 100% at 50% 100%, #FFFFFF, #E0F2FE 28%, #BFDBFE 58%, #93C5FD)'
+ : 'radial-gradient(ellipse 150% 100% at 50% 100%, #8B99AB, #64748B 45%, #475569 70%, #334155)',
+ },
+ 3: {
+ icon: `cloudy-1-${dayNight}.svg`,
+ description: 'Cloudy',
+ gradient: isDarkMode
+ ? 'radial-gradient(ellipse 150% 100% at 50% 100%, #B8C3CF, #758190 38%, #546270 65%, #3D4A58)'
+ : 'radial-gradient(ellipse 150% 100% at 50% 100%, #F5F8FA, #CBD5E1 32%, #94A3B8 65%, #64748B)',
+ },
+ 45: {
+ icon: `fog-${dayNight}.svg`,
+ description: 'Foggy',
+ gradient: isDarkMode
+ ? 'radial-gradient(ellipse 150% 100% at 50% 100%, #C5CDD8, #8892A0 38%, #697380 65%, #4F5A68)'
+ : 'radial-gradient(ellipse 150% 100% at 50% 100%, #FFFFFF, #E2E8F0 30%, #CBD5E1 62%, #94A3B8)',
+ },
+ 48: {
+ icon: `fog-${dayNight}.svg`,
+ description: 'Rime Fog',
+ gradient: isDarkMode
+ ? 'radial-gradient(ellipse 150% 100% at 50% 100%, #C5CDD8, #8892A0 38%, #697380 65%, #4F5A68)'
+ : 'radial-gradient(ellipse 150% 100% at 50% 100%, #FFFFFF, #E2E8F0 30%, #CBD5E1 62%, #94A3B8)',
+ },
+ 51: {
+ icon: `rainy-1-${dayNight}.svg`,
+ description: 'Light Drizzle',
+ gradient: isDarkMode
+ ? 'radial-gradient(ellipse 150% 100% at 50% 100%, #B8D4E5, #6FA4C5 35%, #4A85AC 60%, #356A8E)'
+ : 'radial-gradient(ellipse 150% 100% at 50% 100%, #E5FBFF, #A5F3FC 28%, #67E8F9 60%, #22D3EE)',
+ },
+ 53: {
+ icon: `rainy-1-${dayNight}.svg`,
+ description: 'Drizzle',
+ gradient: isDarkMode
+ ? 'radial-gradient(ellipse 150% 100% at 50% 100%, #B8D4E5, #6FA4C5 35%, #4A85AC 60%, #356A8E)'
+ : 'radial-gradient(ellipse 150% 100% at 50% 100%, #E5FBFF, #A5F3FC 28%, #67E8F9 60%, #22D3EE)',
+ },
+ 55: {
+ icon: `rainy-2-${dayNight}.svg`,
+ description: 'Heavy Drizzle',
+ gradient: isDarkMode
+ ? 'radial-gradient(ellipse 150% 100% at 50% 100%, #A5C5D8, #5E92B0 35%, #3F789D 60%, #2A5F82)'
+ : 'radial-gradient(ellipse 150% 100% at 50% 100%, #D4F3FF, #7DD3FC 30%, #38BDF8 62%, #0EA5E9)',
+ },
+ 61: {
+ icon: `rainy-2-${dayNight}.svg`,
+ description: 'Light Rain',
+ gradient: isDarkMode
+ ? 'radial-gradient(ellipse 150% 100% at 50% 100%, #A5C5D8, #5E92B0 35%, #3F789D 60%, #2A5F82)'
+ : 'radial-gradient(ellipse 150% 100% at 50% 100%, #D4F3FF, #7DD3FC 30%, #38BDF8 62%, #0EA5E9)',
+ },
+ 63: {
+ icon: `rainy-2-${dayNight}.svg`,
+ description: 'Rain',
+ gradient: isDarkMode
+ ? 'radial-gradient(ellipse 150% 100% at 50% 100%, #8DB3C8, #4D819F 38%, #326A87 65%, #215570)'
+ : 'radial-gradient(ellipse 150% 100% at 50% 100%, #B8E8FF, #38BDF8 32%, #0EA5E9 65%, #0284C7)',
+ },
+ 65: {
+ icon: `rainy-3-${dayNight}.svg`,
+ description: 'Heavy Rain',
+ gradient: isDarkMode
+ ? 'radial-gradient(ellipse 150% 100% at 50% 100%, #7BA3B8, #3D6F8A 38%, #295973 65%, #1A455D)'
+ : 'radial-gradient(ellipse 150% 100% at 50% 100%, #9CD9F5, #0EA5E9 32%, #0284C7 65%, #0369A1)',
+ },
+ 71: {
+ icon: `snowy-1-${dayNight}.svg`,
+ description: 'Light Snow',
+ gradient: isDarkMode
+ ? 'radial-gradient(ellipse 150% 100% at 50% 100%, #E5F0FA, #9BB5CE 32%, #7496B8 58%, #527A9E)'
+ : 'radial-gradient(ellipse 150% 100% at 50% 100%, #FFFFFF, #F0F9FF 25%, #E0F2FE 55%, #BAE6FD)',
+ },
+ 73: {
+ icon: `snowy-2-${dayNight}.svg`,
+ description: 'Snow',
+ gradient: isDarkMode
+ ? 'radial-gradient(ellipse 150% 100% at 50% 100%, #D4E5F3, #85A1BD 35%, #6584A8 60%, #496A8E)'
+ : 'radial-gradient(ellipse 150% 100% at 50% 100%, #FAFEFF, #E0F2FE 28%, #BAE6FD 60%, #7DD3FC)',
+ },
+ 75: {
+ icon: `snowy-3-${dayNight}.svg`,
+ description: 'Heavy Snow',
+ gradient: isDarkMode
+ ? 'radial-gradient(ellipse 150% 100% at 50% 100%, #BDD8EB, #6F92AE 35%, #4F7593 60%, #365A78)'
+ : 'radial-gradient(ellipse 150% 100% at 50% 100%, #F0FAFF, #BAE6FD 30%, #7DD3FC 62%, #38BDF8)',
+ },
+ 77: {
+ icon: `snowy-1-${dayNight}.svg`,
+ description: 'Snow Grains',
+ gradient: isDarkMode
+ ? 'radial-gradient(ellipse 150% 100% at 50% 100%, #E5F0FA, #9BB5CE 32%, #7496B8 58%, #527A9E)'
+ : 'radial-gradient(ellipse 150% 100% at 50% 100%, #FFFFFF, #F0F9FF 25%, #E0F2FE 55%, #BAE6FD)',
+ },
+ 80: {
+ icon: `rainy-2-${dayNight}.svg`,
+ description: 'Light Showers',
+ gradient: isDarkMode
+ ? 'radial-gradient(ellipse 150% 100% at 50% 100%, #A5C5D8, #5E92B0 35%, #3F789D 60%, #2A5F82)'
+ : 'radial-gradient(ellipse 150% 100% at 50% 100%, #D4F3FF, #7DD3FC 30%, #38BDF8 62%, #0EA5E9)',
+ },
+ 81: {
+ icon: `rainy-2-${dayNight}.svg`,
+ description: 'Showers',
+ gradient: isDarkMode
+ ? 'radial-gradient(ellipse 150% 100% at 50% 100%, #8DB3C8, #4D819F 38%, #326A87 65%, #215570)'
+ : 'radial-gradient(ellipse 150% 100% at 50% 100%, #B8E8FF, #38BDF8 32%, #0EA5E9 65%, #0284C7)',
+ },
+ 82: {
+ icon: `rainy-3-${dayNight}.svg`,
+ description: 'Heavy Showers',
+ gradient: isDarkMode
+ ? 'radial-gradient(ellipse 150% 100% at 50% 100%, #7BA3B8, #3D6F8A 38%, #295973 65%, #1A455D)'
+ : 'radial-gradient(ellipse 150% 100% at 50% 100%, #9CD9F5, #0EA5E9 32%, #0284C7 65%, #0369A1)',
+ },
+ 85: {
+ icon: `snowy-2-${dayNight}.svg`,
+ description: 'Light Snow Showers',
+ gradient: isDarkMode
+ ? 'radial-gradient(ellipse 150% 100% at 50% 100%, #D4E5F3, #85A1BD 35%, #6584A8 60%, #496A8E)'
+ : 'radial-gradient(ellipse 150% 100% at 50% 100%, #FAFEFF, #E0F2FE 28%, #BAE6FD 60%, #7DD3FC)',
+ },
+ 86: {
+ icon: `snowy-3-${dayNight}.svg`,
+ description: 'Snow Showers',
+ gradient: isDarkMode
+ ? 'radial-gradient(ellipse 150% 100% at 50% 100%, #BDD8EB, #6F92AE 35%, #4F7593 60%, #365A78)'
+ : 'radial-gradient(ellipse 150% 100% at 50% 100%, #F0FAFF, #BAE6FD 30%, #7DD3FC 62%, #38BDF8)',
+ },
+ 95: {
+ icon: `scattered-thunderstorms-${dayNight}.svg`,
+ description: 'Thunderstorm',
+ gradient: isDarkMode
+ ? 'radial-gradient(ellipse 150% 100% at 50% 100%, #8A95A3, #5F6A7A 38%, #475260 65%, #2F3A48)'
+ : 'radial-gradient(ellipse 150% 100% at 50% 100%, #C8D1DD, #94A3B8 32%, #64748B 65%, #475569)',
+ },
+ 96: {
+ icon: 'severe-thunderstorm.svg',
+ description: 'Thunderstorm + Hail',
+ gradient: isDarkMode
+ ? 'radial-gradient(ellipse 150% 100% at 50% 100%, #7A8593, #515C6D 38%, #3A4552 65%, #242D3A)'
+ : 'radial-gradient(ellipse 150% 100% at 50% 100%, #B0BBC8, #64748B 32%, #475569 65%, #334155)',
+ },
+ 99: {
+ icon: 'severe-thunderstorm.svg',
+ description: 'Severe Thunderstorm',
+ gradient: isDarkMode
+ ? 'radial-gradient(ellipse 150% 100% at 50% 100%, #6A7583, #434E5D 40%, #2F3A47 68%, #1C2530)'
+ : 'radial-gradient(ellipse 150% 100% at 50% 100%, #9BA8B8, #475569 35%, #334155 68%, #1E293B)',
+ },
+ };
+
+ return weatherMap[code] || weatherMap[0];
+};
+
+const Weather = ({
+ location,
+ current,
+ daily,
+ timezone,
+}: WeatherWidgetProps) => {
+ const [isDarkMode, setIsDarkMode] = useState(false);
+ const unit = getMeasurementUnit();
+ const isImperial = unit === 'imperial';
+ const tempUnitLabel = isImperial ? '°F' : '°C';
+ const windUnitLabel = isImperial ? 'mph' : 'km/h';
+
+ const formatTemp = (celsius: number) => {
+ if (!Number.isFinite(celsius)) return 0;
+ return Math.round(isImperial ? (celsius * 9) / 5 + 32 : celsius);
+ };
+
+ const formatWind = (speedKmh: number) => {
+ if (!Number.isFinite(speedKmh)) return 0;
+ return Math.round(isImperial ? speedKmh * 0.621371 : speedKmh);
+ };
+
+ useEffect(() => {
+ const checkDarkMode = () => {
+ setIsDarkMode(document.documentElement.classList.contains('dark'));
+ };
+
+ checkDarkMode();
+
+ const observer = new MutationObserver(checkDarkMode);
+ observer.observe(document.documentElement, {
+ attributes: true,
+ attributeFilter: ['class'],
+ });
+
+ return () => observer.disconnect();
+ }, []);
+
+ const weatherInfo = useMemo(
+ () =>
+ getWeatherInfo(
+ current?.weather_code || 0,
+ current?.is_day === 1,
+ isDarkMode,
+ ),
+ [current?.weather_code, current?.is_day, isDarkMode],
+ );
+
+ const forecast = useMemo(() => {
+ if (!daily?.time || daily.time.length === 0) return [];
+
+ return daily.time.slice(1, 7).map((time, idx) => {
+ const date = new Date(time);
+ const dayName = date.toLocaleDateString('en-US', { weekday: 'short' });
+ const isDay = true;
+ const weatherCode = daily.weather_code[idx + 1];
+ const info = getWeatherInfo(weatherCode, isDay, isDarkMode);
+
+ return {
+ day: dayName,
+ icon: info.icon,
+ high: formatTemp(daily.temperature_2m_max[idx + 1]),
+ low: formatTemp(daily.temperature_2m_min[idx + 1]),
+ precipitation: daily.precipitation_probability_max[idx + 1] || 0,
+ };
+ });
+ }, [daily, isDarkMode, isImperial]);
+
+ if (!current || !daily || !daily.time || daily.time.length === 0) {
+    return (
+      <div className="flex h-full w-full items-center justify-center rounded-2xl p-4 text-sm">
+        <span>Weather data unavailable for {location}</span>
+      </div>
+    );
+  }
+
+  return (
+    <div
+      className="relative flex h-full w-full flex-col justify-between overflow-hidden rounded-2xl p-4 text-white"
+      style={{ background: weatherInfo.gradient }}
+    >
+      <div className="flex flex-row items-start justify-between">
+        <div className="flex flex-row items-center space-x-2">
+          <img
+            src={`/weather-ico/${weatherInfo.icon}`}
+            alt={weatherInfo.description}
+            className="h-12 w-12"
+          />
+          <div className="flex flex-col">
+            <div className="flex flex-row items-baseline">
+              <span className="text-3xl font-semibold">
+                {formatTemp(current.temperature_2m)}°
+              </span>
+              <span className="text-sm">{tempUnitLabel}</span>
+            </div>
+            <span className="text-xs">{weatherInfo.description}</span>
+          </div>
+        </div>
+        <div className="flex flex-col items-end text-right">
+          <span className="text-xs">
+            {formatTemp(daily.temperature_2m_max[0])}°{' '}
+            {formatTemp(daily.temperature_2m_min[0])}°
+          </span>
+          <span className="text-sm font-medium">{location}</span>
+          <span className="text-xs opacity-80">
+            {new Date(current.time).toLocaleString('en-US', {
+              weekday: 'short',
+              hour: 'numeric',
+              minute: '2-digit',
+            })}
+          </span>
+        </div>
+      </div>
+
+      <div className="flex flex-row justify-between">
+        {forecast.map((day, idx) => (
+          <div key={idx} className="flex flex-col items-center text-xs">
+            <span>{day.day}</span>
+            <img src={`/weather-ico/${day.icon}`} alt="" className="h-6 w-6" />
+            <span className="font-medium">{day.high}°</span>
+            <span className="opacity-70">{day.low}°</span>
+            {day.precipitation > 0 && (
+              <span className="opacity-80">{day.precipitation}%</span>
+            )}
+          </div>
+        ))}
+      </div>
+
+      <div className="flex flex-row justify-between text-xs">
+        <div className="flex flex-col items-center">
+          <span className="opacity-70">Wind</span>
+          <span>
+            {formatWind(current.wind_speed_10m)} {windUnitLabel}
+          </span>
+        </div>
+        <div className="flex flex-col items-center">
+          <span className="opacity-70">Humidity</span>
+          <span>{Math.round(current.relative_humidity_2m)}%</span>
+        </div>
+        <div className="flex flex-col items-center">
+          <span className="opacity-70">Feels Like</span>
+          <span>
+            {formatTemp(current.apparent_temperature)}
+            {tempUnitLabel}
+          </span>
+        </div>
+      </div>
+    </div>
+  );
+};
+
+export default Weather;
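+
+/*
+ * Note: the `current` and `daily` props are assumed to mirror Open-Meteo's
+ * response fields (temperature_2m, weather_code, is_day,
+ * precipitation_probability_max, ...) in metric units; `formatTemp` and
+ * `formatWind` convert for imperial users. Usage sketch (illustrative):
+ *
+ *   <Weather
+ *     location="Berlin"
+ *     current={data.current}
+ *     daily={data.daily}
+ *     timezone={data.timezone}
+ *   />
+ */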
diff --git a/src/lib/actions.ts b/src/lib/actions.ts
index cb75d88..c2855c0 100644
--- a/src/lib/actions.ts
+++ b/src/lib/actions.ts
@@ -1,6 +1,12 @@
-import { Message } from '@/components/ChatWindow';
+export const getSuggestions = async (chatHistory: [string, string][]) => {
+ const chatTurns = chatHistory.map(([role, content]) => {
+ if (role === 'human') {
+ return { role: 'user', content };
+ } else {
+ return { role: 'assistant', content };
+ }
+ });
-export const getSuggestions = async (chatHistory: Message[]) => {
const chatModel = localStorage.getItem('chatModelKey');
const chatModelProvider = localStorage.getItem('chatModelProviderId');
@@ -10,7 +16,7 @@ export const getSuggestions = async (chatHistory: Message[]) => {
'Content-Type': 'application/json',
},
body: JSON.stringify({
- chatHistory: chatHistory,
+ chatHistory: chatTurns,
chatModel: {
providerId: chatModelProvider,
key: chatModel,
diff --git a/src/lib/agents/media/image.ts b/src/lib/agents/media/image.ts
new file mode 100644
index 0000000..b04d532
--- /dev/null
+++ b/src/lib/agents/media/image.ts
@@ -0,0 +1,66 @@
+/* These don't quite qualify as agents, but I'll keep them here to keep the structure consistent. */
+
+import { searchSearxng } from '@/lib/searxng';
+import {
+ imageSearchFewShots,
+ imageSearchPrompt,
+} from '@/lib/prompts/media/image';
+import BaseLLM from '@/lib/models/base/llm';
+import z from 'zod';
+import { ChatTurnMessage } from '@/lib/types';
+import formatChatHistoryAsString from '@/lib/utils/formatHistory';
+
+type ImageSearchChainInput = {
+ chatHistory: ChatTurnMessage[];
+ query: string;
+};
+
+type ImageSearchResult = {
+ img_src: string;
+ url: string;
+ title: string;
+};
+
+const searchImages = async (
+ input: ImageSearchChainInput,
+ llm: BaseLLM,
+) => {
+ const schema = z.object({
+ query: z.string().describe('The image search query.'),
+ });
+
+ const res = await llm.generateObject({
+ messages: [
+ {
+ role: 'system',
+ content: imageSearchPrompt,
+ },
+ ...imageSearchFewShots,
+ {
+ role: 'user',
+        content: `<chat_history>\n${formatChatHistoryAsString(input.chatHistory)}\n</chat_history>\n\n<query>\n${input.query}\n</query>`,
+ },
+ ],
+ schema: schema,
+ });
+
+ const searchRes = await searchSearxng(res.query, {
+ engines: ['bing images', 'google images'],
+ });
+
+ const images: ImageSearchResult[] = [];
+
+ searchRes.results.forEach((result) => {
+ if (result.img_src && result.url && result.title) {
+ images.push({
+ img_src: result.img_src,
+ url: result.url,
+ title: result.title,
+ });
+ }
+ });
+
+ return images.slice(0, 10);
+};
+
+export default searchImages;
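+
+/*
+ * Usage sketch (illustrative; `llm` is assumed to be any configured BaseLLM
+ * instance obtained elsewhere in the app):
+ *
+ *   const images = await searchImages(
+ *     { chatHistory: [], query: 'northern lights over Iceland' },
+ *     llm,
+ *   );
+ *   // -> at most 10 results of shape { img_src, url, title }
+ */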
diff --git a/src/lib/agents/media/video.ts b/src/lib/agents/media/video.ts
new file mode 100644
index 0000000..c8f19b6
--- /dev/null
+++ b/src/lib/agents/media/video.ts
@@ -0,0 +1,66 @@
+import formatChatHistoryAsString from '@/lib/utils/formatHistory';
+import { searchSearxng } from '@/lib/searxng';
+import {
+ videoSearchFewShots,
+ videoSearchPrompt,
+} from '@/lib/prompts/media/videos';
+import { ChatTurnMessage } from '@/lib/types';
+import BaseLLM from '@/lib/models/base/llm';
+import z from 'zod';
+
+type VideoSearchChainInput = {
+ chatHistory: ChatTurnMessage[];
+ query: string;
+};
+
+type VideoSearchResult = {
+ img_src: string;
+ url: string;
+ title: string;
+ iframe_src: string;
+};
+
+const searchVideos = async (
+ input: VideoSearchChainInput,
+ llm: BaseLLM,
+) => {
+ const schema = z.object({
+ query: z.string().describe('The video search query.'),
+ });
+
+ const res = await llm.generateObject({
+ messages: [
+ {
+ role: 'system',
+ content: videoSearchPrompt,
+ },
+ ...videoSearchFewShots,
+ {
+ role: 'user',
+        content: `<chat_history>\n${formatChatHistoryAsString(input.chatHistory)}\n</chat_history>\n\n<query>\n${input.query}\n</query>`,
+ },
+ ],
+ schema: schema,
+ });
+
+ const searchRes = await searchSearxng(res.query, {
+ engines: ['youtube'],
+ });
+
+ const videos: VideoSearchResult[] = [];
+
+ searchRes.results.forEach((result) => {
+ if (result.thumbnail && result.url && result.title && result.iframe_src) {
+ videos.push({
+ img_src: result.thumbnail,
+ url: result.url,
+ title: result.title,
+ iframe_src: result.iframe_src,
+ });
+ }
+ });
+
+ return videos.slice(0, 10);
+};
+
+export default searchVideos;
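+
+/*
+ * Usage sketch (illustrative): same calling convention as `searchImages`, but
+ * each result also carries `iframe_src`, which the UI can embed directly as a
+ * YouTube player.
+ *
+ *   const videos = await searchVideos(
+ *     { chatHistory: [], query: 'how jet engines work' },
+ *     llm,
+ *   );
+ */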
diff --git a/src/lib/agents/search/api.ts b/src/lib/agents/search/api.ts
new file mode 100644
index 0000000..924bc68
--- /dev/null
+++ b/src/lib/agents/search/api.ts
@@ -0,0 +1,99 @@
+import { ResearcherOutput, SearchAgentInput } from './types';
+import SessionManager from '@/lib/session';
+import { classify } from './classifier';
+import Researcher from './researcher';
+import { getWriterPrompt } from '@/lib/prompts/search/writer';
+import { WidgetExecutor } from './widgets';
+
+class APISearchAgent {
+ async searchAsync(session: SessionManager, input: SearchAgentInput) {
+ const classification = await classify({
+ chatHistory: input.chatHistory,
+ enabledSources: input.config.sources,
+ query: input.followUp,
+ llm: input.config.llm,
+ });
+
+ const widgetPromise = WidgetExecutor.executeAll({
+ classification,
+ chatHistory: input.chatHistory,
+ followUp: input.followUp,
+ llm: input.config.llm,
+ });
+
+    let searchPromise: Promise<ResearcherOutput> | null = null;
+
+ if (!classification.classification.skipSearch) {
+ const researcher = new Researcher();
+ searchPromise = researcher.research(SessionManager.createSession(), {
+ chatHistory: input.chatHistory,
+ followUp: input.followUp,
+ classification: classification,
+ config: input.config,
+ });
+ }
+
+ const [widgetOutputs, searchResults] = await Promise.all([
+ widgetPromise,
+ searchPromise,
+ ]);
+
+ if (searchResults) {
+ session.emit('data', {
+ type: 'searchResults',
+ data: searchResults.searchFindings,
+ });
+ }
+
+ session.emit('data', {
+ type: 'researchComplete',
+ });
+
+ const finalContext =
+ searchResults?.searchFindings
+ .map(
+ (f, index) =>
+            `<source id="${index + 1}">${f.content}</source>`,
+ )
+ .join('\n') || '';
+
+ const widgetContext = widgetOutputs
+ .map((o) => {
+        return `<widget type="${o.type}">${o.llmContext}</widget>`;
+ })
+ .join('\n-------------\n');
+
+    const finalContextWithWidgets = `<search_results>\n${finalContext}\n</search_results>\n\n<widgets>\n${widgetContext}\n</widgets>`;
+
+ const writerPrompt = getWriterPrompt(
+ finalContextWithWidgets,
+ input.config.systemInstructions,
+ input.config.mode,
+ );
+
+ const answerStream = input.config.llm.streamText({
+ messages: [
+ {
+ role: 'system',
+ content: writerPrompt,
+ },
+ ...input.chatHistory,
+ {
+ role: 'user',
+ content: input.followUp,
+ },
+ ],
+ });
+
+ for await (const chunk of answerStream) {
+ session.emit('data', {
+ type: 'response',
+ data: chunk.contentChunk,
+ });
+ }
+
+ session.emit('end', {});
+ }
+}
+
+export default APISearchAgent;
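+
+/*
+ * Wiring sketch (illustrative): a route handler is expected to subscribe to
+ * the session and forward events to the client. This assumes SessionManager
+ * exposes an EventEmitter-style `on`; only the 'data' and 'end' events used
+ * above are relied upon.
+ *
+ *   const session = SessionManager.createSession();
+ *   session.on('data', (event) => writeChunk(event)); // writeChunk: hypothetical
+ *   session.on('end', () => closeStream());           // closeStream: hypothetical
+ *   new APISearchAgent().searchAsync(session, input);
+ */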
diff --git a/src/lib/agents/search/classifier.ts b/src/lib/agents/search/classifier.ts
new file mode 100644
index 0000000..5909e35
--- /dev/null
+++ b/src/lib/agents/search/classifier.ts
@@ -0,0 +1,53 @@
+import z from 'zod';
+import { ClassifierInput } from './types';
+import { classifierPrompt } from '@/lib/prompts/search/classifier';
+import formatChatHistoryAsString from '@/lib/utils/formatHistory';
+
+const schema = z.object({
+ classification: z.object({
+ skipSearch: z
+ .boolean()
+ .describe('Indicates whether to skip the search step.'),
+ personalSearch: z
+ .boolean()
+ .describe('Indicates whether to perform a personal search.'),
+ academicSearch: z
+ .boolean()
+ .describe('Indicates whether to perform an academic search.'),
+ discussionSearch: z
+ .boolean()
+ .describe('Indicates whether to perform a discussion search.'),
+ showWeatherWidget: z
+ .boolean()
+ .describe('Indicates whether to show the weather widget.'),
+ showStockWidget: z
+ .boolean()
+ .describe('Indicates whether to show the stock widget.'),
+ showCalculationWidget: z
+ .boolean()
+ .describe('Indicates whether to show the calculation widget.'),
+ }),
+ standaloneFollowUp: z
+ .string()
+ .describe(
+ "A self-contained, context-independent reformulation of the user's question.",
+ ),
+});
+
+export const classify = async (input: ClassifierInput) => {
+ const output = await input.llm.generateObject({
+ messages: [
+ {
+ role: 'system',
+ content: classifierPrompt,
+ },
+ {
+ role: 'user',
+        content: `<chat_history>\n${formatChatHistoryAsString(input.chatHistory)}\n</chat_history>\n\n<query>\n${input.query}\n</query>`,
+ },
+ ],
+ schema,
+ });
+
+ return output;
+};
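+
+/*
+ * Example (illustrative): in a conversation about Tokyo, the follow-up
+ * "what's the weather like there?" could plausibly classify as:
+ *
+ *   {
+ *     classification: {
+ *       skipSearch: false,
+ *       personalSearch: false,
+ *       academicSearch: false,
+ *       discussionSearch: false,
+ *       showWeatherWidget: true,
+ *       showStockWidget: false,
+ *       showCalculationWidget: false,
+ *     },
+ *     standaloneFollowUp: 'What is the current weather in Tokyo?',
+ *   }
+ */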
diff --git a/src/lib/agents/search/index.ts b/src/lib/agents/search/index.ts
new file mode 100644
index 0000000..8591832
--- /dev/null
+++ b/src/lib/agents/search/index.ts
@@ -0,0 +1,186 @@
+import { ResearcherOutput, SearchAgentInput } from './types';
+import SessionManager from '@/lib/session';
+import { classify } from './classifier';
+import Researcher from './researcher';
+import { getWriterPrompt } from '@/lib/prompts/search/writer';
+import { WidgetExecutor } from './widgets';
+import db from '@/lib/db';
+import { chats, messages } from '@/lib/db/schema';
+import { and, eq, gt } from 'drizzle-orm';
+import { TextBlock } from '@/lib/types';
+
+class SearchAgent {
+ async searchAsync(session: SessionManager, input: SearchAgentInput) {
+ const exists = await db.query.messages.findFirst({
+ where: and(
+ eq(messages.chatId, input.chatId),
+ eq(messages.messageId, input.messageId),
+ ),
+ });
+
+ if (!exists) {
+ await db.insert(messages).values({
+ chatId: input.chatId,
+ messageId: input.messageId,
+ backendId: session.id,
+ query: input.followUp,
+ createdAt: new Date().toISOString(),
+ status: 'answering',
+ responseBlocks: [],
+ });
+ } else {
+ await db
+ .delete(messages)
+ .where(
+ and(eq(messages.chatId, input.chatId), gt(messages.id, exists.id)),
+ )
+ .execute();
+ await db
+ .update(messages)
+ .set({
+ status: 'answering',
+ backendId: session.id,
+ responseBlocks: [],
+ })
+ .where(
+ and(
+ eq(messages.chatId, input.chatId),
+ eq(messages.messageId, input.messageId),
+ ),
+ )
+ .execute();
+ }
+
+ const classification = await classify({
+ chatHistory: input.chatHistory,
+ enabledSources: input.config.sources,
+ query: input.followUp,
+ llm: input.config.llm,
+ });
+
+ const widgetPromise = WidgetExecutor.executeAll({
+ classification,
+ chatHistory: input.chatHistory,
+ followUp: input.followUp,
+ llm: input.config.llm,
+ }).then((widgetOutputs) => {
+ widgetOutputs.forEach((o) => {
+ session.emitBlock({
+ id: crypto.randomUUID(),
+ type: 'widget',
+ data: {
+ widgetType: o.type,
+ params: o.data,
+ },
+ });
+ });
+ return widgetOutputs;
+ });
+
+    let searchPromise: Promise<ResearcherOutput> | null = null;
+
+ if (!classification.classification.skipSearch) {
+ const researcher = new Researcher();
+ searchPromise = researcher.research(session, {
+ chatHistory: input.chatHistory,
+ followUp: input.followUp,
+ classification: classification,
+ config: input.config,
+ });
+ }
+
+ const [widgetOutputs, searchResults] = await Promise.all([
+ widgetPromise,
+ searchPromise,
+ ]);
+
+ session.emit('data', {
+ type: 'researchComplete',
+ });
+
+ const finalContext =
+ searchResults?.searchFindings
+ .map(
+ (f, index) =>
+            `<source id="${index + 1}">${f.content}</source>`,
+ )
+ .join('\n') || '';
+
+ const widgetContext = widgetOutputs
+ .map((o) => {
+        return `<widget type="${o.type}">${o.llmContext}</widget>`;
+ })
+ .join('\n-------------\n');
+
+    const finalContextWithWidgets = `<search_results>\n${finalContext}\n</search_results>\n\n<widgets>\n${widgetContext}\n</widgets>`;
+
+ const writerPrompt = getWriterPrompt(
+ finalContextWithWidgets,
+ input.config.systemInstructions,
+ input.config.mode,
+ );
+ const answerStream = input.config.llm.streamText({
+ messages: [
+ {
+ role: 'system',
+ content: writerPrompt,
+ },
+ ...input.chatHistory,
+ {
+ role: 'user',
+ content: input.followUp,
+ },
+ ],
+ });
+
+ let responseBlockId = '';
+
+ for await (const chunk of answerStream) {
+ if (!responseBlockId) {
+ const block: TextBlock = {
+ id: crypto.randomUUID(),
+ type: 'text',
+ data: chunk.contentChunk,
+ };
+
+ session.emitBlock(block);
+
+ responseBlockId = block.id;
+ } else {
+ const block = session.getBlock(responseBlockId) as TextBlock | null;
+
+ if (!block) {
+ continue;
+ }
+
+ block.data += chunk.contentChunk;
+
+ session.updateBlock(block.id, [
+ {
+ op: 'replace',
+ path: '/data',
+ value: block.data,
+ },
+ ]);
+ }
+ }
+
+ session.emit('end', {});
+
+ await db
+ .update(messages)
+ .set({
+ status: 'completed',
+ responseBlocks: session.getAllBlocks(),
+ })
+ .where(
+ and(
+ eq(messages.chatId, input.chatId),
+ eq(messages.messageId, input.messageId),
+ ),
+ )
+ .execute();
+ }
+}
+
+export default SearchAgent;
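+
+/*
+ * Note on the streaming model implemented above: the first answer chunk
+ * creates a 'text' block via emitBlock; every later chunk mutates that block
+ * through a JSON-Patch style updateBlock call with a single
+ * { op: 'replace', path: '/data' } operation. A client only needs to handle
+ * "new block" and "patch block" events to render the answer live, and the
+ * same accumulated blocks are persisted to `messages.responseBlocks` at the
+ * end.
+ */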
diff --git a/src/lib/agents/search/researcher/actions/academicSearch.ts b/src/lib/agents/search/researcher/actions/academicSearch.ts
new file mode 100644
index 0000000..72e1f4b
--- /dev/null
+++ b/src/lib/agents/search/researcher/actions/academicSearch.ts
@@ -0,0 +1,129 @@
+import z from 'zod';
+import { ResearchAction } from '../../types';
+import { Chunk, SearchResultsResearchBlock } from '@/lib/types';
+import { searchSearxng } from '@/lib/searxng';
+
+const schema = z.object({
+ queries: z.array(z.string()).describe('List of academic search queries'),
+});
+
+const academicSearchDescription = `
+Use this tool to perform academic searches for scholarly articles, papers, and research studies relevant to the user's query. Provide a list of concise search queries that will help gather comprehensive academic information on the topic at hand.
+You can provide up to 3 queries at a time. Make sure the queries are specific and relevant to the user's needs.
+
+For example, if the user is interested in recent advancements in renewable energy, your queries could be:
+1. "Recent advancements in renewable energy 2024"
+2. "Cutting-edge research on solar power technologies"
+3. "Innovations in wind energy systems"
+
+If this tool is present and no other tools are more relevant, you MUST use this tool to get the needed academic information.
+`;
+
+const academicSearchAction: ResearchAction = {
+ name: 'academic_search',
+ schema: schema,
+ getDescription: () => academicSearchDescription,
+ getToolDescription: () =>
+ "Use this tool to perform academic searches for scholarly articles, papers, and research studies relevant to the user's query. Provide a list of concise search queries that will help gather comprehensive academic information on the topic at hand.",
+ enabled: (config) =>
+ config.sources.includes('academic') &&
+ config.classification.classification.skipSearch === false &&
+ config.classification.classification.academicSearch === true,
+ execute: async (input, additionalConfig) => {
+ input.queries = input.queries.slice(0, 3);
+
+ const researchBlock = additionalConfig.session.getBlock(
+ additionalConfig.researchBlockId,
+ );
+
+ if (researchBlock && researchBlock.type === 'research') {
+ researchBlock.data.subSteps.push({
+ type: 'searching',
+ id: crypto.randomUUID(),
+ searching: input.queries,
+ });
+
+ additionalConfig.session.updateBlock(additionalConfig.researchBlockId, [
+ {
+ op: 'replace',
+ path: '/data/subSteps',
+ value: researchBlock.data.subSteps,
+ },
+ ]);
+ }
+
+ const searchResultsBlockId = crypto.randomUUID();
+ let searchResultsEmitted = false;
+
+ let results: Chunk[] = [];
+
+ const search = async (q: string) => {
+ const res = await searchSearxng(q, {
+ engines: ['arxiv', 'google scholar', 'pubmed'],
+ });
+
+ const resultChunks: Chunk[] = res.results.map((r) => ({
+ content: r.content || r.title,
+ metadata: {
+ title: r.title,
+ url: r.url,
+ },
+ }));
+
+ results.push(...resultChunks);
+
+ if (
+ !searchResultsEmitted &&
+ researchBlock &&
+ researchBlock.type === 'research'
+ ) {
+ searchResultsEmitted = true;
+
+ researchBlock.data.subSteps.push({
+ id: searchResultsBlockId,
+ type: 'search_results',
+ reading: resultChunks,
+ });
+
+ additionalConfig.session.updateBlock(additionalConfig.researchBlockId, [
+ {
+ op: 'replace',
+ path: '/data/subSteps',
+ value: researchBlock.data.subSteps,
+ },
+ ]);
+ } else if (
+ searchResultsEmitted &&
+ researchBlock &&
+ researchBlock.type === 'research'
+ ) {
+ const subStepIndex = researchBlock.data.subSteps.findIndex(
+ (step) => step.id === searchResultsBlockId,
+ );
+
+ const subStep = researchBlock.data.subSteps[
+ subStepIndex
+ ] as SearchResultsResearchBlock;
+
+ subStep.reading.push(...resultChunks);
+
+ additionalConfig.session.updateBlock(additionalConfig.researchBlockId, [
+ {
+ op: 'replace',
+ path: '/data/subSteps',
+ value: researchBlock.data.subSteps,
+ },
+ ]);
+ }
+ };
+
+ await Promise.all(input.queries.map(search));
+
+ return {
+ type: 'search_results',
+ results,
+ };
+ },
+};
+
+export default academicSearchAction;
diff --git a/src/lib/agents/search/researcher/actions/done.ts b/src/lib/agents/search/researcher/actions/done.ts
new file mode 100644
index 0000000..d2c4ed6
--- /dev/null
+++ b/src/lib/agents/search/researcher/actions/done.ts
@@ -0,0 +1,24 @@
+import z from 'zod';
+import { ResearchAction } from '../../types';
+
+const actionDescription = `
+Use this action ONLY when you have completed all necessary research and are ready to provide a final answer to the user. This indicates that you have gathered sufficient information from previous steps and are concluding the research process.
+YOU MUST CALL THIS ACTION TO SIGNAL COMPLETION; DO NOT OUTPUT FINAL ANSWERS DIRECTLY TO THE USER.
+IT WILL BE TRIGGERED AUTOMATICALLY IF THE MAXIMUM NUMBER OF ITERATIONS IS REACHED, SO IF YOU'RE LOW ON ITERATIONS, DON'T CALL IT; FOCUS ON GATHERING ESSENTIAL INFO FIRST.
+`;
+
+const doneAction: ResearchAction = {
+ name: 'done',
+ schema: z.object({}),
+ getToolDescription: () =>
+ 'Only call this after __reasoning_preamble AND after any other needed tool calls when you truly have enough to answer. Do not call if information is still missing.',
+ getDescription: () => actionDescription,
+ enabled: (_) => true,
+ execute: async (params, additionalConfig) => {
+ return {
+ type: 'done',
+ };
+ },
+};
+
+export default doneAction;
diff --git a/src/lib/agents/search/researcher/actions/index.ts b/src/lib/agents/search/researcher/actions/index.ts
new file mode 100644
index 0000000..8864c08
--- /dev/null
+++ b/src/lib/agents/search/researcher/actions/index.ts
@@ -0,0 +1,18 @@
+import academicSearchAction from './academicSearch';
+import doneAction from './done';
+import planAction from './plan';
+import ActionRegistry from './registry';
+import scrapeURLAction from './scrapeURL';
+import socialSearchAction from './socialSearch';
+import uploadsSearchAction from './uploadsSearch';
+import webSearchAction from './webSearch';
+
+ActionRegistry.register(webSearchAction);
+ActionRegistry.register(doneAction);
+ActionRegistry.register(planAction);
+ActionRegistry.register(scrapeURLAction);
+ActionRegistry.register(uploadsSearchAction);
+ActionRegistry.register(academicSearchAction);
+ActionRegistry.register(socialSearchAction);
+
+export { ActionRegistry };
diff --git a/src/lib/agents/search/researcher/actions/plan.ts b/src/lib/agents/search/researcher/actions/plan.ts
new file mode 100644
index 0000000..32ea623
--- /dev/null
+++ b/src/lib/agents/search/researcher/actions/plan.ts
@@ -0,0 +1,40 @@
+import z from 'zod';
+import { ResearchAction } from '../../types';
+
+const schema = z.object({
+ plan: z
+ .string()
+ .describe(
+ 'A concise natural-language plan in one short paragraph. Open with a short intent phrase (e.g., "Okay, the user wants to...", "Searching for...", "Looking into...") and lay out the steps you will take.',
+ ),
+});
+
+const actionDescription = `
+Use this tool FIRST on every turn to state your plan in natural language before any other action. Keep it short, action-focused, and tailored to the current query.
+Make sure not to include references to any tools or actions you might take; state just the plan itself. The user isn't aware of the tools, but they love to see your thought process.
+
+Here are some examples of good plans:
+
+- "Okay, the user wants to know the latest advancements in renewable energy. I will start by looking for recent articles and studies on this topic, then summarize the key points." -> "I have gathered enough information to provide a comprehensive answer."
+- "The user is asking about the health benefits of a Mediterranean diet. I will search for scientific studies and expert opinions on this diet, then compile the findings into a clear summary." -> "I have gathered information about the Mediterranean diet and its health benefits, I will now look up for any recent studies to ensure the information is current."
+
+
+YOU CAN NEVER CALL ANY OTHER TOOL BEFORE CALLING THIS ONE FIRST; IF YOU DO, THAT CALL WILL BE IGNORED.
+`;
+
+const planAction: ResearchAction = {
+ name: '__reasoning_preamble',
+ schema: schema,
+ getToolDescription: () =>
+ 'Use this FIRST on every turn to state your plan in natural language before any other action. Keep it short, action-focused, and tailored to the current query.',
+ getDescription: () => actionDescription,
+ enabled: (config) => config.mode !== 'speed',
+ execute: async (input, _) => {
+ return {
+ type: 'reasoning',
+ reasoning: input.plan,
+ };
+ },
+};
+
+export default planAction;
diff --git a/src/lib/agents/search/researcher/actions/registry.ts b/src/lib/agents/search/researcher/actions/registry.ts
new file mode 100644
index 0000000..a8de513
--- /dev/null
+++ b/src/lib/agents/search/researcher/actions/registry.ts
@@ -0,0 +1,105 @@
+import { Tool, ToolCall } from '@/lib/models/types';
+import {
+ ActionOutput,
+ AdditionalConfig,
+ ClassifierOutput,
+ ResearchAction,
+ SearchAgentConfig,
+ SearchSources,
+} from '../../types';
+
+class ActionRegistry {
+  private static actions: Map<string, ResearchAction> = new Map();
+
+ static register(action: ResearchAction) {
+ this.actions.set(action.name, action);
+ }
+
+ static get(name: string): ResearchAction | undefined {
+ return this.actions.get(name);
+ }
+
+ static getAvailableActions(config: {
+ classification: ClassifierOutput;
+ fileIds: string[];
+ mode: SearchAgentConfig['mode'];
+ sources: SearchSources[];
+ }): ResearchAction[] {
+    return Array.from(this.actions.values()).filter((action) =>
+      action.enabled(config),
+    );
+ }
+
+ static getAvailableActionTools(config: {
+ classification: ClassifierOutput;
+ fileIds: string[];
+ mode: SearchAgentConfig['mode'];
+ sources: SearchSources[];
+ }): Tool[] {
+ const availableActions = this.getAvailableActions(config);
+
+ return availableActions.map((action) => ({
+ name: action.name,
+ description: action.getToolDescription({ mode: config.mode }),
+ schema: action.schema,
+ }));
+ }
+
+ static getAvailableActionsDescriptions(config: {
+ classification: ClassifierOutput;
+ fileIds: string[];
+ mode: SearchAgentConfig['mode'];
+ sources: SearchSources[];
+ }): string {
+ const availableActions = this.getAvailableActions(config);
+
+ return availableActions
+ .map(
+ (action) =>
+          `<action name="${action.name}">\n${action.getDescription({ mode: config.mode })}\n</action>`,
+ )
+ .join('\n\n');
+ }
+
+ static async execute(
+ name: string,
+ params: any,
+ additionalConfig: AdditionalConfig & {
+ researchBlockId: string;
+ fileIds: string[];
+ },
+ ) {
+ const action = this.actions.get(name);
+
+ if (!action) {
+ throw new Error(`Action with name ${name} not found`);
+ }
+
+ return action.execute(params, additionalConfig);
+ }
+
+  static async executeAll(
+    actions: ToolCall[],
+    additionalConfig: AdditionalConfig & {
+      researchBlockId: string;
+      fileIds: string[];
+    },
+  ): Promise<ActionOutput[]> {
+    // Run all tool calls concurrently, but keep the outputs aligned with the
+    // order of the incoming calls so callers can match results to calls by index.
+    return Promise.all(
+      actions.map((actionConfig) =>
+        this.execute(actionConfig.name, actionConfig.arguments, additionalConfig),
+      ),
+    );
+  }
+}
+
+export default ActionRegistry;
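+
+/*
+ * Extension sketch (hypothetical): registering a custom action. `news_search`
+ * is an invented example, not an action shipped in this diff; the shape
+ * follows the ResearchAction interface from ../../types.
+ *
+ *   const newsSearchAction: ResearchAction = {
+ *     name: 'news_search',
+ *     schema: z.object({ queries: z.array(z.string()) }),
+ *     getToolDescription: () => 'Search recent news coverage.',
+ *     getDescription: () => 'Use this tool to search recent news coverage.',
+ *     enabled: (config) => config.sources.includes('web'),
+ *     execute: async (input, additionalConfig) => ({
+ *       type: 'search_results',
+ *       results: [],
+ *     }),
+ *   };
+ *
+ *   ActionRegistry.register(newsSearchAction);
+ */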
diff --git a/src/lib/agents/search/researcher/actions/scrapeURL.ts b/src/lib/agents/search/researcher/actions/scrapeURL.ts
new file mode 100644
index 0000000..c702a70
--- /dev/null
+++ b/src/lib/agents/search/researcher/actions/scrapeURL.ts
@@ -0,0 +1,139 @@
+import z from 'zod';
+import { ResearchAction } from '../../types';
+import { Chunk, ReadingResearchBlock } from '@/lib/types';
+import TurnDown from 'turndown';
+
+const turndownService = new TurnDown();
+
+const schema = z.object({
+ urls: z.array(z.string()).describe('A list of URLs to scrape content from.'),
+});
+
+const actionDescription = `
+Use this tool to scrape and extract content from the provided URLs. This is useful when the user has asked you to extract or summarize information from specific web pages. You can provide up to 3 URLs at a time. NEVER CALL THIS TOOL EXPLICITLY YOURSELF UNLESS INSTRUCTED TO DO SO BY THE USER.
+You should only call this tool when the user has specifically requested information from certain web pages; never call it on your own to gather extra information.
+
+For example, if the user says "Please summarize the content of https://example.com/article", call this tool with that URL, then provide the summary. Likewise, for "What does X mean according to https://example.com/page", call this tool with that URL and provide the explanation.
+`;
+
+const scrapeURLAction: ResearchAction = {
+ name: 'scrape_url',
+ schema: schema,
+ getToolDescription: () =>
+    'Use this tool to scrape and extract content from the provided URLs. This is useful when the user has asked you to extract or summarize information from specific web pages. You can provide up to 3 URLs at a time. NEVER CALL THIS TOOL EXPLICITLY YOURSELF UNLESS INSTRUCTED TO DO SO BY THE USER.',
+ getDescription: () => actionDescription,
+ enabled: (_) => true,
+ execute: async (params, additionalConfig) => {
+ params.urls = params.urls.slice(0, 3);
+
+ let readingBlockId = crypto.randomUUID();
+ let readingEmitted = false;
+
+ const researchBlock = additionalConfig.session.getBlock(
+ additionalConfig.researchBlockId,
+ );
+
+ const results: Chunk[] = [];
+
+ await Promise.all(
+ params.urls.map(async (url) => {
+ try {
+ const res = await fetch(url);
+ const text = await res.text();
+
+ const title =
+            text.match(/<title[^>]*>(.*?)<\/title>/i)?.[1] || `Content from ${url}`;
+
+ if (
+ !readingEmitted &&
+ researchBlock &&
+ researchBlock.type === 'research'
+ ) {
+ readingEmitted = true;
+ researchBlock.data.subSteps.push({
+ id: readingBlockId,
+ type: 'reading',
+ reading: [
+ {
+ content: '',
+ metadata: {
+ url,
+ title: title,
+ },
+ },
+ ],
+ });
+
+ additionalConfig.session.updateBlock(
+ additionalConfig.researchBlockId,
+ [
+ {
+ op: 'replace',
+ path: '/data/subSteps',
+ value: researchBlock.data.subSteps,
+ },
+ ],
+ );
+ } else if (
+ readingEmitted &&
+ researchBlock &&
+ researchBlock.type === 'research'
+ ) {
+ const subStepIndex = researchBlock.data.subSteps.findIndex(
+ (step: any) => step.id === readingBlockId,
+ );
+
+ const subStep = researchBlock.data.subSteps[
+ subStepIndex
+ ] as ReadingResearchBlock;
+
+ subStep.reading.push({
+ content: '',
+ metadata: {
+ url,
+ title: title,
+ },
+ });
+
+ additionalConfig.session.updateBlock(
+ additionalConfig.researchBlockId,
+ [
+ {
+ op: 'replace',
+ path: '/data/subSteps',
+ value: researchBlock.data.subSteps,
+ },
+ ],
+ );
+ }
+
+ const markdown = turndownService.turndown(text);
+
+ results.push({
+ content: markdown,
+ metadata: {
+ url,
+ title: title,
+ },
+ });
+ } catch (error) {
+ results.push({
+ content: `Failed to fetch content from ${url}: ${error}`,
+ metadata: {
+ url,
+ title: `Error fetching ${url}`,
+ },
+ });
+ }
+ }),
+ );
+
+ return {
+ type: 'search_results',
+ results,
+ };
+ },
+};
+
+export default scrapeURLAction;
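+
+/*
+ * Note: pages are fetched with the global `fetch` and converted to Markdown
+ * with Turndown, so JavaScript-rendered pages only yield their static markup.
+ * Usage sketch (illustrative):
+ *
+ *   const out = await scrapeURLAction.execute(
+ *     { urls: ['https://example.com/article'] },
+ *     additionalConfig, // llm, embedding, session, researchBlockId, fileIds
+ *   );
+ *   // out: { type: 'search_results', results: Chunk[] }
+ */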
diff --git a/src/lib/agents/search/researcher/actions/socialSearch.ts b/src/lib/agents/search/researcher/actions/socialSearch.ts
new file mode 100644
index 0000000..16468ab
--- /dev/null
+++ b/src/lib/agents/search/researcher/actions/socialSearch.ts
@@ -0,0 +1,129 @@
+import z from 'zod';
+import { ResearchAction } from '../../types';
+import { Chunk, SearchResultsResearchBlock } from '@/lib/types';
+import { searchSearxng } from '@/lib/searxng';
+
+const schema = z.object({
+ queries: z.array(z.string()).describe('List of social search queries'),
+});
+
+const socialSearchDescription = `
+Use this tool to perform social media searches for relevant posts, discussions, and trends related to the user's query. Provide a list of concise search queries that will help gather comprehensive social media information on the topic at hand.
+You can provide up to 3 queries at a time. Make sure the queries are specific and relevant to the user's needs.
+
+For example, if the user is interested in public opinion on electric vehicles, your queries could be:
+1. "Electric vehicles public opinion 2024"
+2. "Social media discussions on EV adoption"
+3. "Trends in electric vehicle usage"
+
+If this tool is present and no other tools are more relevant, you MUST use this tool to get the needed social media information.
+`;
+
+const socialSearchAction: ResearchAction = {
+ name: 'social_search',
+ schema: schema,
+ getDescription: () => socialSearchDescription,
+ getToolDescription: () =>
+ "Use this tool to perform social media searches for relevant posts, discussions, and trends related to the user's query. Provide a list of concise search queries that will help gather comprehensive social media information on the topic at hand.",
+ enabled: (config) =>
+ config.sources.includes('discussions') &&
+ config.classification.classification.skipSearch === false &&
+ config.classification.classification.discussionSearch === true,
+ execute: async (input, additionalConfig) => {
+ input.queries = input.queries.slice(0, 3);
+
+ const researchBlock = additionalConfig.session.getBlock(
+ additionalConfig.researchBlockId,
+ );
+
+ if (researchBlock && researchBlock.type === 'research') {
+ researchBlock.data.subSteps.push({
+ type: 'searching',
+ id: crypto.randomUUID(),
+ searching: input.queries,
+ });
+
+ additionalConfig.session.updateBlock(additionalConfig.researchBlockId, [
+ {
+ op: 'replace',
+ path: '/data/subSteps',
+ value: researchBlock.data.subSteps,
+ },
+ ]);
+ }
+
+ const searchResultsBlockId = crypto.randomUUID();
+ let searchResultsEmitted = false;
+
+ let results: Chunk[] = [];
+
+ const search = async (q: string) => {
+ const res = await searchSearxng(q, {
+ engines: ['reddit'],
+ });
+
+ const resultChunks: Chunk[] = res.results.map((r) => ({
+ content: r.content || r.title,
+ metadata: {
+ title: r.title,
+ url: r.url,
+ },
+ }));
+
+ results.push(...resultChunks);
+
+ if (
+ !searchResultsEmitted &&
+ researchBlock &&
+ researchBlock.type === 'research'
+ ) {
+ searchResultsEmitted = true;
+
+ researchBlock.data.subSteps.push({
+ id: searchResultsBlockId,
+ type: 'search_results',
+ reading: resultChunks,
+ });
+
+ additionalConfig.session.updateBlock(additionalConfig.researchBlockId, [
+ {
+ op: 'replace',
+ path: '/data/subSteps',
+ value: researchBlock.data.subSteps,
+ },
+ ]);
+ } else if (
+ searchResultsEmitted &&
+ researchBlock &&
+ researchBlock.type === 'research'
+ ) {
+ const subStepIndex = researchBlock.data.subSteps.findIndex(
+ (step) => step.id === searchResultsBlockId,
+ );
+
+ const subStep = researchBlock.data.subSteps[
+ subStepIndex
+ ] as SearchResultsResearchBlock;
+
+ subStep.reading.push(...resultChunks);
+
+ additionalConfig.session.updateBlock(additionalConfig.researchBlockId, [
+ {
+ op: 'replace',
+ path: '/data/subSteps',
+ value: researchBlock.data.subSteps,
+ },
+ ]);
+ }
+ };
+
+ await Promise.all(input.queries.map(search));
+
+ return {
+ type: 'search_results',
+ results,
+ };
+ },
+};
+
+export default socialSearchAction;
diff --git a/src/lib/agents/search/researcher/actions/uploadsSearch.ts b/src/lib/agents/search/researcher/actions/uploadsSearch.ts
new file mode 100644
index 0000000..8195063
--- /dev/null
+++ b/src/lib/agents/search/researcher/actions/uploadsSearch.ts
@@ -0,0 +1,102 @@
+import z from 'zod';
+import { ResearchAction } from '../../types';
+import UploadStore from '@/lib/uploads/store';
+
+const schema = z.object({
+ queries: z
+ .array(z.string())
+ .describe(
+ 'A list of queries to search in user uploaded files. Can be a maximum of 3 queries.',
+ ),
+});
+
+const uploadsSearchAction: ResearchAction = {
+ name: 'uploads_search',
+  // Available whenever the user has uploaded files.
+  enabled: (config) => config.fileIds.length > 0,
+ schema,
+ getToolDescription: () =>
+ `Use this tool to perform searches over the user's uploaded files. This is useful when you need to gather information from the user's documents to answer their questions. You can provide up to 3 queries at a time. You will have to use this every single time if this is present and relevant.`,
+ getDescription: () => `
+ Use this tool to perform searches over the user's uploaded files. This is useful when you need to gather information from the user's documents to answer their questions. You can provide up to 3 queries at a time. You will have to use this every single time if this is present and relevant.
+ Always ensure that the queries you use are directly relevant to the user's request and pertain to the content of their uploaded files.
+
+ For example, if the user says "Please find information about X in my uploaded documents", you can call this tool with a query related to X to retrieve the relevant information from their files.
+ Never use this tool to search the web or for information that is not contained within the user's uploaded files.
+ `,
+ execute: async (input, additionalConfig) => {
+ input.queries = input.queries.slice(0, 3);
+
+ const researchBlock = additionalConfig.session.getBlock(
+ additionalConfig.researchBlockId,
+ );
+
+ if (researchBlock && researchBlock.type === 'research') {
+ researchBlock.data.subSteps.push({
+ id: crypto.randomUUID(),
+ type: 'upload_searching',
+ queries: input.queries,
+ });
+
+ additionalConfig.session.updateBlock(additionalConfig.researchBlockId, [
+ {
+ op: 'replace',
+ path: '/data/subSteps',
+ value: researchBlock.data.subSteps,
+ },
+ ]);
+ }
+
+ const uploadStore = new UploadStore({
+ embeddingModel: additionalConfig.embedding,
+ fileIds: additionalConfig.fileIds,
+ });
+
+ const results = await uploadStore.query(input.queries, 10);
+
+    const seenIds = new Map<string, number>();
+
+ const filteredSearchResults = results
+ .map((result, index) => {
+ if (result.metadata.url && !seenIds.has(result.metadata.url)) {
+ seenIds.set(result.metadata.url, index);
+ return result;
+ } else if (result.metadata.url && seenIds.has(result.metadata.url)) {
+ const existingIndex = seenIds.get(result.metadata.url)!;
+ const existingResult = results[existingIndex];
+
+ existingResult.content += `\n\n${result.content}`;
+
+ return undefined;
+ }
+
+ return result;
+ })
+ .filter((r) => r !== undefined);
+
+ if (researchBlock && researchBlock.type === 'research') {
+ researchBlock.data.subSteps.push({
+ id: crypto.randomUUID(),
+ type: 'upload_search_results',
+ results: filteredSearchResults,
+ });
+
+ additionalConfig.session.updateBlock(additionalConfig.researchBlockId, [
+ {
+ op: 'replace',
+ path: '/data/subSteps',
+ value: researchBlock.data.subSteps,
+ },
+ ]);
+ }
+
+ return {
+ type: 'search_results',
+ results: filteredSearchResults,
+ };
+ },
+};
+
+export default uploadsSearchAction;
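+
+/*
+ * Note: results are de-duplicated on each chunk's metadata.url; later chunks
+ * from the same file are merged into the first chunk's content instead of
+ * being returned separately, so the writer sees one consolidated excerpt per
+ * file.
+ */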
diff --git a/src/lib/agents/search/researcher/actions/webSearch.ts b/src/lib/agents/search/researcher/actions/webSearch.ts
new file mode 100644
index 0000000..4d60b79
--- /dev/null
+++ b/src/lib/agents/search/researcher/actions/webSearch.ts
@@ -0,0 +1,182 @@
+import z from 'zod';
+import { ResearchAction } from '../../types';
+import { searchSearxng } from '@/lib/searxng';
+import { Chunk, SearchResultsResearchBlock } from '@/lib/types';
+
+const actionSchema = z.object({
+ type: z.literal('web_search'),
+ queries: z
+ .array(z.string())
+ .describe('An array of search queries to perform web searches for.'),
+});
+
+const speedModePrompt = `
+Use this tool to perform web searches based on the provided queries. This is useful when you need to gather information from the web to answer the user's questions. You can provide up to 3 queries at a time. You will have to use this every single time if this is present and relevant.
+You are currently in speed mode, meaning you only get to call this tool once. Prioritize the queries most likely to get you the needed information in one go.
+
+Your queries should be very targeted and specific to the information you need, avoid broad or generic queries.
+Your queries shouldn't be sentences but rather keywords that are SEO friendly and can be used to search the web for information.
+
+For example, if the user is asking about the features of a new technology, you might use queries like "GPT-5.1 features", "GPT-5.1 release date", "GPT-5.1 improvements" rather than a broad query like "Tell me about GPT-5.1".
+
+You can search for 3 queries in one go, so make sure to utilize all 3 to maximize the information you gather. Even if a question is simple, split your queries to cover different aspects or related topics for a comprehensive understanding.
+If this tool is present and no other tools are more relevant, you MUST use this tool to get the needed information.
+`;
+
+const balancedModePrompt = `
+Use this tool to perform web searches based on the provided queries. This is useful when you need to gather information from the web to answer the user's questions. You can provide up to 3 queries at a time. You will have to use this every single time if this is present and relevant.
+
+You can call this tool several times if needed to gather enough information.
+Start initially with broader queries to get an overview, then narrow down with more specific queries based on the results you receive.
+
+Your queries shouldn't be sentences but rather keywords that are SEO friendly and can be used to search the web for information.
+
+For example if the user is asking about Tesla, your actions should be like:
+1. __reasoning_preamble "The user is asking about Tesla. I will start with broader queries to get an overview of Tesla, then narrow down with more specific queries based on the results I receive." then
+2. web_search ["Tesla", "Tesla latest news", "Tesla stock price"] then
+3. __reasoning_preamble "Based on the previous search results, I will now narrow down my queries to focus on Tesla's recent developments and stock performance." then
+4. web_search ["Tesla Q2 2025 earnings", "Tesla new model 2025", "Tesla stock analysis"] then
+5. __reasoning_preamble "I have gathered enough information to provide a comprehensive answer."
+6. done.
+
+You can search for 3 queries in one go, so make sure to utilize all 3 to maximize the information you gather. Even if a question is simple, split your queries to cover different aspects or related topics for a comprehensive understanding.
+If this tool is present and no other tools are more relevant, you MUST use this tool to get the needed information. You can call this tool multiple times as needed.
+`;
+
+const qualityModePrompt = `
+Use this tool to perform web searches based on the provided queries. This is useful when you need to gather information from the web to answer the user's questions. You can provide up to 3 queries at a time. You will have to use this every single time if this is present and relevant.
+
+You have to call this tool several times to gather enough information unless the question is very simple (like greeting questions or basic facts).
+Start initially with broader queries to get an overview, then narrow down with more specific queries based on the results you receive.
+Never stop before completing at least 5-6 search iterations unless the user's question is very simple.
+
+Your queries shouldn't be sentences but rather keywords that are SEO friendly and can be used to search the web for information.
+
+You can search for 3 queries in one go, so make sure to utilize all 3 to maximize the information you gather. Even if a question is simple, split your queries to cover different aspects or related topics for a comprehensive understanding.
+If this tool is present and no other tools are more relevant, you MUST use this tool to get the needed information. You can call this tool multiple times as needed.
+`;
+
+const webSearchAction: ResearchAction = {
+ name: 'web_search',
+ schema: actionSchema,
+ getToolDescription: () =>
+ "Use this tool to perform web searches based on the provided queries. This is useful when you need to gather information from the web to answer the user's questions. You can provide up to 3 queries at a time. You will have to use this every single time if this is present and relevant.",
+ getDescription: (config) => {
+ let prompt = '';
+
+ switch (config.mode) {
+ case 'speed':
+ prompt = speedModePrompt;
+ break;
+ case 'balanced':
+ prompt = balancedModePrompt;
+ break;
+ case 'quality':
+ prompt = qualityModePrompt;
+ break;
+ default:
+ prompt = speedModePrompt;
+ break;
+ }
+
+ return prompt;
+ },
+ enabled: (config) =>
+ config.sources.includes('web') &&
+ config.classification.classification.skipSearch === false,
+ execute: async (input, additionalConfig) => {
+ input.queries = input.queries.slice(0, 3);
+
+ const researchBlock = additionalConfig.session.getBlock(
+ additionalConfig.researchBlockId,
+ );
+
+ if (researchBlock && researchBlock.type === 'research') {
+ researchBlock.data.subSteps.push({
+ id: crypto.randomUUID(),
+ type: 'searching',
+ searching: input.queries,
+ });
+
+ additionalConfig.session.updateBlock(additionalConfig.researchBlockId, [
+ {
+ op: 'replace',
+ path: '/data/subSteps',
+ value: researchBlock.data.subSteps,
+ },
+ ]);
+ }
+
+ const searchResultsBlockId = crypto.randomUUID();
+ let searchResultsEmitted = false;
+
+ let results: Chunk[] = [];
+
+ const search = async (q: string) => {
+ const res = await searchSearxng(q);
+
+ const resultChunks: Chunk[] = res.results.map((r) => ({
+ content: r.content || r.title,
+ metadata: {
+ title: r.title,
+ url: r.url,
+ },
+ }));
+
+ results.push(...resultChunks);
+
+ if (
+ !searchResultsEmitted &&
+ researchBlock &&
+ researchBlock.type === 'research'
+ ) {
+ searchResultsEmitted = true;
+
+ researchBlock.data.subSteps.push({
+ id: searchResultsBlockId,
+ type: 'search_results',
+ reading: resultChunks,
+ });
+
+ additionalConfig.session.updateBlock(additionalConfig.researchBlockId, [
+ {
+ op: 'replace',
+ path: '/data/subSteps',
+ value: researchBlock.data.subSteps,
+ },
+ ]);
+ } else if (
+ searchResultsEmitted &&
+ researchBlock &&
+ researchBlock.type === 'research'
+ ) {
+ const subStepIndex = researchBlock.data.subSteps.findIndex(
+ (step) => step.id === searchResultsBlockId,
+ );
+
+ const subStep = researchBlock.data.subSteps[
+ subStepIndex
+ ] as SearchResultsResearchBlock;
+
+ subStep.reading.push(...resultChunks);
+
+ additionalConfig.session.updateBlock(additionalConfig.researchBlockId, [
+ {
+ op: 'replace',
+ path: '/data/subSteps',
+ value: researchBlock.data.subSteps,
+ },
+ ]);
+ }
+ };
+
+ await Promise.all(input.queries.map(search));
+
+ return {
+ type: 'search_results',
+ results,
+ };
+ },
+};
+
+export default webSearchAction;
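+
+/*
+ * Note: the queries run concurrently via Promise.all. The first query to
+ * resolve creates the shared 'search_results' sub-step; later queries append
+ * to it, so the UI shows a single growing result list per tool call.
+ */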
diff --git a/src/lib/agents/search/researcher/index.ts b/src/lib/agents/search/researcher/index.ts
new file mode 100644
index 0000000..d653281
--- /dev/null
+++ b/src/lib/agents/search/researcher/index.ts
@@ -0,0 +1,222 @@
+import { ActionOutput, ResearcherInput, ResearcherOutput } from '../types';
+import { ActionRegistry } from './actions';
+import { getResearcherPrompt } from '@/lib/prompts/search/researcher';
+import SessionManager from '@/lib/session';
+import { Message, ReasoningResearchBlock } from '@/lib/types';
+import formatChatHistoryAsString from '@/lib/utils/formatHistory';
+import { ToolCall } from '@/lib/models/types';
+
+class Researcher {
+ async research(
+ session: SessionManager,
+ input: ResearcherInput,
+ ): Promise {
+ let actionOutput: ActionOutput[] = [];
+ let maxIteration =
+ input.config.mode === 'speed'
+ ? 2
+ : input.config.mode === 'balanced'
+ ? 6
+ : 25;
+
+ const availableTools = ActionRegistry.getAvailableActionTools({
+ classification: input.classification,
+ fileIds: input.config.fileIds,
+ mode: input.config.mode,
+ sources: input.config.sources,
+ });
+
+ const availableActionsDescription =
+ ActionRegistry.getAvailableActionsDescriptions({
+ classification: input.classification,
+ fileIds: input.config.fileIds,
+ mode: input.config.mode,
+ sources: input.config.sources,
+ });
+
+ const researchBlockId = crypto.randomUUID();
+
+ session.emitBlock({
+ id: researchBlockId,
+ type: 'research',
+ data: {
+ subSteps: [],
+ },
+ });
+
+ const agentMessageHistory: Message[] = [
+ {
+ role: 'user',
+        content: `
+        <conversation>
+          ${formatChatHistoryAsString(input.chatHistory.slice(-10))}
+          User: ${input.followUp} (Standalone question: ${input.classification.standaloneFollowUp})
+        </conversation>
+        `,
+ },
+ ];
+
+ for (let i = 0; i < maxIteration; i++) {
+ const researcherPrompt = getResearcherPrompt(
+ availableActionsDescription,
+ input.config.mode,
+ i,
+ maxIteration,
+ input.config.fileIds,
+ );
+
+ const actionStream = input.config.llm.streamText({
+ messages: [
+ {
+ role: 'system',
+ content: researcherPrompt,
+ },
+ ...agentMessageHistory,
+ ],
+ tools: availableTools,
+ });
+
+ const block = session.getBlock(researchBlockId);
+
+ let reasoningEmitted = false;
+ let reasoningId = crypto.randomUUID();
+
+ let finalToolCalls: ToolCall[] = [];
+
+ for await (const partialRes of actionStream) {
+ if (partialRes.toolCallChunk.length > 0) {
+ partialRes.toolCallChunk.forEach((tc) => {
+ if (
+ tc.name === '__reasoning_preamble' &&
+ tc.arguments['plan'] &&
+ !reasoningEmitted &&
+ block &&
+ block.type === 'research'
+ ) {
+ reasoningEmitted = true;
+
+ block.data.subSteps.push({
+ id: reasoningId,
+ type: 'reasoning',
+ reasoning: tc.arguments['plan'],
+ });
+
+ session.updateBlock(researchBlockId, [
+ {
+ op: 'replace',
+ path: '/data/subSteps',
+ value: block.data.subSteps,
+ },
+ ]);
+ } else if (
+ tc.name === '__reasoning_preamble' &&
+ tc.arguments['plan'] &&
+ reasoningEmitted &&
+ block &&
+ block.type === 'research'
+ ) {
+ const subStepIndex = block.data.subSteps.findIndex(
+ (step: any) => step.id === reasoningId,
+ );
+
+ if (subStepIndex !== -1) {
+ const subStep = block.data.subSteps[
+ subStepIndex
+ ] as ReasoningResearchBlock;
+ subStep.reasoning = tc.arguments['plan'];
+ session.updateBlock(researchBlockId, [
+ {
+ op: 'replace',
+ path: '/data/subSteps',
+ value: block.data.subSteps,
+ },
+ ]);
+ }
+ }
+
+ const existingIndex = finalToolCalls.findIndex(
+ (ftc) => ftc.id === tc.id,
+ );
+
+ if (existingIndex !== -1) {
+ finalToolCalls[existingIndex].arguments = tc.arguments;
+ } else {
+ finalToolCalls.push(tc);
+ }
+ });
+ }
+ }
+
+ if (finalToolCalls.length === 0) {
+ break;
+ }
+
+ if (finalToolCalls[finalToolCalls.length - 1].name === 'done') {
+ break;
+ }
+
+ agentMessageHistory.push({
+ role: 'assistant',
+ content: '',
+ tool_calls: finalToolCalls,
+ });
+
+ const actionResults = await ActionRegistry.executeAll(finalToolCalls, {
+ llm: input.config.llm,
+ embedding: input.config.embedding,
+ session: session,
+ researchBlockId: researchBlockId,
+ fileIds: input.config.fileIds,
+ });
+
+ actionOutput.push(...actionResults);
+
+ actionResults.forEach((action, i) => {
+ agentMessageHistory.push({
+ role: 'tool',
+ id: finalToolCalls[i].id,
+ name: finalToolCalls[i].name,
+ content: JSON.stringify(action),
+ });
+ });
+ }
+
+ const searchResults = actionOutput
+ .filter((a) => a.type === 'search_results')
+ .flatMap((a) => a.results);
+
+    const seenUrls = new Map<string, number>();
+
+ const filteredSearchResults = searchResults
+ .map((result, index) => {
+ if (result.metadata.url && !seenUrls.has(result.metadata.url)) {
+ seenUrls.set(result.metadata.url, index);
+ return result;
+ } else if (result.metadata.url && seenUrls.has(result.metadata.url)) {
+ const existingIndex = seenUrls.get(result.metadata.url)!;
+
+ const existingResult = searchResults[existingIndex];
+
+ existingResult.content += `\n\n${result.content}`;
+
+ return undefined;
+ }
+
+ return result;
+ })
+ .filter((r) => r !== undefined);
+
+ session.emitBlock({
+ id: crypto.randomUUID(),
+ type: 'source',
+ data: filteredSearchResults,
+ });
+
+ return {
+ findings: actionOutput,
+ searchFindings: filteredSearchResults,
+ };
+ }
+}
+
+export default Researcher;
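+
+/*
+ * Usage sketch (illustrative): the iteration budget is derived from the mode
+ * (speed: 2, balanced: 6, quality: 25).
+ *
+ *   const researcher = new Researcher();
+ *   const { searchFindings } = await researcher.research(session, {
+ *     chatHistory,
+ *     followUp: 'What changed in the latest release?',
+ *     classification, // output of classify()
+ *     config,         // SearchAgentConfig
+ *   });
+ */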
diff --git a/src/lib/agents/search/types.ts b/src/lib/agents/search/types.ts
new file mode 100644
index 0000000..64c967e
--- /dev/null
+++ b/src/lib/agents/search/types.ts
@@ -0,0 +1,122 @@
+import z from 'zod';
+import BaseLLM from '../../models/base/llm';
+import BaseEmbedding from '@/lib/models/base/embedding';
+import SessionManager from '@/lib/session';
+import { ChatTurnMessage, Chunk } from '@/lib/types';
+
+export type SearchSources = 'web' | 'discussions' | 'academic';
+
+export type SearchAgentConfig = {
+ sources: SearchSources[];
+ fileIds: string[];
+ llm: BaseLLM;
+ embedding: BaseEmbedding;
+ mode: 'speed' | 'balanced' | 'quality';
+ systemInstructions: string;
+};
+
+export type SearchAgentInput = {
+ chatHistory: ChatTurnMessage[];
+ followUp: string;
+ config: SearchAgentConfig;
+ chatId: string;
+ messageId: string;
+};
+
+export type WidgetInput = {
+ chatHistory: ChatTurnMessage[];
+ followUp: string;
+ classification: ClassifierOutput;
+ llm: BaseLLM;
+};
+
+export type Widget = {
+ type: string;
+ shouldExecute: (classification: ClassifierOutput) => boolean;
+ execute: (input: WidgetInput) => Promise;
+};
+
+export type WidgetOutput = {
+ type: string;
+ llmContext: string;
+ data: any;
+};
+
+export type ClassifierInput = {
+ llm: BaseLLM;
+ enabledSources: SearchSources[];
+ query: string;
+ chatHistory: ChatTurnMessage[];
+};
+
+export type ClassifierOutput = {
+ classification: {
+ skipSearch: boolean;
+ personalSearch: boolean;
+ academicSearch: boolean;
+ discussionSearch: boolean;
+ showWeatherWidget: boolean;
+ showStockWidget: boolean;
+ showCalculationWidget: boolean;
+ };
+ standaloneFollowUp: string;
+};
+
+export type AdditionalConfig = {
+ llm: BaseLLM;
+ embedding: BaseEmbedding;
+ session: SessionManager;
+};
+
+export type ResearcherInput = {
+ chatHistory: ChatTurnMessage[];
+ followUp: string;
+ classification: ClassifierOutput;
+ config: SearchAgentConfig;
+};
+
+export type ResearcherOutput = {
+ findings: ActionOutput[];
+ searchFindings: Chunk[];
+};
+
+export type SearchActionOutput = {
+ type: 'search_results';
+ results: Chunk[];
+};
+
+export type DoneActionOutput = {
+ type: 'done';
+};
+
+export type ReasoningResearchAction = {
+ type: 'reasoning';
+ reasoning: string;
+};
+
+export type ActionOutput =
+ | SearchActionOutput
+ | DoneActionOutput
+ | ReasoningResearchAction;
+
+export interface ResearchAction<
+ TSchema extends z.ZodObject<any> = z.ZodObject<any>,
+> {
+ name: string;
+ schema: TSchema;
+ getToolDescription: (config: { mode: SearchAgentConfig['mode'] }) => string;
+ getDescription: (config: { mode: SearchAgentConfig['mode'] }) => string;
+ enabled: (config: {
+ classification: ClassifierOutput;
+ fileIds: string[];
+ mode: SearchAgentConfig['mode'];
+ sources: SearchSources[];
+ }) => boolean;
+ execute: (
+ params: z.infer<TSchema>,
+ additionalConfig: AdditionalConfig & {
+ researchBlockId: string;
+ fileIds: string[];
+ },
+ ) => Promise<ActionOutput>;
+}
diff --git a/src/lib/agents/search/widgets/calculationWidget.ts b/src/lib/agents/search/widgets/calculationWidget.ts
new file mode 100644
index 0000000..3e28015
--- /dev/null
+++ b/src/lib/agents/search/widgets/calculationWidget.ts
@@ -0,0 +1,71 @@
+import z from 'zod';
+import { Widget } from '../types';
+import formatChatHistoryAsString from '@/lib/utils/formatHistory';
+import { evaluate as mathEval } from 'mathjs';
+
+const schema = z.object({
+ expression: z
+ .string()
+ .describe('Mathematical expression to calculate or evaluate.'),
+ notPresent: z
+ .boolean()
+ .describe('Whether there is no need for the calculation widget.'),
+});
+
+const system = `
+
+Assistant is a calculation expression extractor. You will receive a user follow-up and a conversation history.
+Your task is to determine if there is a mathematical expression that needs to be calculated or evaluated. If there is, extract the expression and return it. If there is no need for any calculation, set notPresent to true.
+
+
+
+Make sure the extracted expression is valid and can be evaluated with the Math.js library (https://mathjs.org/). If the expression is not valid, set notPresent to true.
+If you cannot extract a valid expression, set notPresent to true.
+
+
+
+You must respond in the following JSON format without any extra text, explanations or filler sentences:
+{
+ "expression": string,
+ "notPresent": boolean
+}
+
+`;
+
+const calculationWidget: Widget = {
+ type: 'calculationWidget',
+ shouldExecute: (classification) =>
+ classification.classification.showCalculationWidget,
+ execute: async (input) => {
+ const output = await input.llm.generateObject({
+ messages: [
+ {
+ role: 'system',
+ content: system,
+ },
+ {
+ role: 'user',
+ content: `<conversation>\n${formatChatHistoryAsString(input.chatHistory)}\n</conversation>\n\n<follow_up>\n${input.followUp}\n</follow_up>`,
+ },
+ ],
+ schema,
+ });
+
+ if (output.notPresent) {
+ return;
+ }
+
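+ // Evaluate the extracted expression with mathjs; invalid expressions
+ // should already have been filtered out via the notPresent flag.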
+ const result = mathEval(output.expression);
+
+ return {
+ type: 'calculation_result',
+ llmContext: `The result of the calculation for the expression "${output.expression}" is: ${result}`,
+ data: {
+ expression: output.expression,
+ result,
+ },
+ };
+ },
+};
+
+export default calculationWidget;
diff --git a/src/lib/agents/search/widgets/executor.ts b/src/lib/agents/search/widgets/executor.ts
new file mode 100644
index 0000000..89f1830
--- /dev/null
+++ b/src/lib/agents/search/widgets/executor.ts
@@ -0,0 +1,36 @@
+import { Widget, WidgetInput, WidgetOutput } from '../types';
+
+class WidgetExecutor {
+ static widgets = new Map<string, Widget>();
+
+ static register(widget: Widget) {
+ this.widgets.set(widget.type, widget);
+ }
+
+ static getWidget(type: string): Widget | undefined {
+ return this.widgets.get(type);
+ }
+
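+ // Run every registered widget concurrently; a widget that throws or
+ // returns nothing is skipped rather than failing the whole request.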
+ static async executeAll(input: WidgetInput): Promise<WidgetOutput[]> {
+ const results: WidgetOutput[] = [];
+
+ await Promise.all(
+ Array.from(this.widgets.values()).map(async (widget) => {
+ try {
+ if (widget.shouldExecute(input.classification)) {
+ const output = await widget.execute(input);
+ if (output) {
+ results.push(output);
+ }
+ }
+ } catch (e) {
+ console.log(`Error executing widget ${widget.type}:`, e);
+ }
+ }),
+ );
+
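+// Base contract every embedding provider implements; CONFIG carries
+// provider-specific settings such as API keys or model names.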
+ return results;
+ }
+}
+
+export default WidgetExecutor;
diff --git a/src/lib/agents/search/widgets/index.ts b/src/lib/agents/search/widgets/index.ts
new file mode 100644
index 0000000..9958b0d
--- /dev/null
+++ b/src/lib/agents/search/widgets/index.ts
@@ -0,0 +1,10 @@
+import calculationWidget from './calculationWidget';
+import WidgetExecutor from './executor';
+import weatherWidget from './weatherWidget';
+import stockWidget from './stockWidget';
+
+WidgetExecutor.register(weatherWidget);
+WidgetExecutor.register(calculationWidget);
+WidgetExecutor.register(stockWidget);
+
+export { WidgetExecutor };
diff --git a/src/lib/agents/search/widgets/stockWidget.ts b/src/lib/agents/search/widgets/stockWidget.ts
new file mode 100644
index 0000000..4ac2059
--- /dev/null
+++ b/src/lib/agents/search/widgets/stockWidget.ts
@@ -0,0 +1,434 @@
+import z from 'zod';
+import { Widget } from '../types';
+import YahooFinance from 'yahoo-finance2';
+import formatChatHistoryAsString from '@/lib/utils/formatHistory';
+
+const yf = new YahooFinance({
+ suppressNotices: ['yahooSurvey'],
+});
+
+const schema = z.object({
+ name: z
+ .string()
+ .describe(
+ "The stock name for example Nvidia, Google, Apple, Microsoft etc. You can also return ticker if you're aware of it otherwise just use the name.",
+ ),
+ comparisonNames: z
+ .array(z.string())
+ .max(3)
+ .describe(
+ "Optional array of up to 3 stock names to compare against the base name (e.g., ['Microsoft', 'GOOGL', 'Meta']). Charts will show percentage change comparison.",
+ ),
+ notPresent: z
+ .boolean()
+ .describe('Whether there is no need for the stock widget.'),
+});
+
+const systemPrompt = `
+
+You are a stock ticker/name extractor. You will receive a user follow-up and a conversation history.
+Your task is to determine if the user is asking about stock information and extract the stock name(s) they want data for.
+
+
+
+- If the user is asking about a stock, extract the primary stock name or ticker.
+- If the user wants to compare stocks, extract up to 3 comparison stock names in comparisonNames.
+- You can use either stock names (e.g., "Nvidia", "Apple") or tickers (e.g., "NVDA", "AAPL").
+- If you cannot determine a valid stock or the query is not stock-related, set notPresent to true.
+- If no comparison is needed, set comparisonNames to an empty array.
+
+
+
+You must respond in the following JSON format without any extra text, explanations or filler sentences:
+{
+ "name": string,
+ "comparisonNames": string[],
+ "notPresent": boolean
+}
+
+`;
+
+const stockWidget: Widget = {
+ type: 'stockWidget',
+ shouldExecute: (classification) =>
+ classification.classification.showStockWidget,
+ execute: async (input) => {
+ const output = await input.llm.generateObject({
+ messages: [
+ {
+ role: 'system',
+ content: systemPrompt,
+ },
+ {
+ role: 'user',
+ content: `<conversation>\n${formatChatHistoryAsString(input.chatHistory)}\n</conversation>\n\n<follow_up>\n${input.followUp}\n</follow_up>`,
+ },
+ ],
+ schema,
+ });
+
+ if (output.notPresent) {
+ return;
+ }
+
+ const params = output;
+ try {
+ const name = params.name;
+
+ const findings = await yf.search(name);
+
+ if (findings.quotes.length === 0)
+ throw new Error(`Failed to find quote for name/symbol: ${name}`);
+
+ const ticker = findings.quotes[0].symbol as string;
+
+ const quote: any = await yf.quote(ticker);
+
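+ // Fetch all chart ranges in parallel; each range fails independently
+ // (resolving to null) so one missing interval doesn't sink the widget.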
+ const chartPromises = {
+ '1D': yf
+ .chart(ticker, {
+ period1: new Date(Date.now() - 2 * 24 * 60 * 60 * 1000),
+ period2: new Date(),
+ interval: '5m',
+ })
+ .catch(() => null),
+ '5D': yf
+ .chart(ticker, {
+ period1: new Date(Date.now() - 6 * 24 * 60 * 60 * 1000),
+ period2: new Date(),
+ interval: '15m',
+ })
+ .catch(() => null),
+ '1M': yf
+ .chart(ticker, {
+ period1: new Date(Date.now() - 30 * 24 * 60 * 60 * 1000),
+ interval: '1d',
+ })
+ .catch(() => null),
+ '3M': yf
+ .chart(ticker, {
+ period1: new Date(Date.now() - 90 * 24 * 60 * 60 * 1000),
+ interval: '1d',
+ })
+ .catch(() => null),
+ '6M': yf
+ .chart(ticker, {
+ period1: new Date(Date.now() - 180 * 24 * 60 * 60 * 1000),
+ interval: '1d',
+ })
+ .catch(() => null),
+ '1Y': yf
+ .chart(ticker, {
+ period1: new Date(Date.now() - 365 * 24 * 60 * 60 * 1000),
+ interval: '1d',
+ })
+ .catch(() => null),
+ MAX: yf
+ .chart(ticker, {
+ period1: new Date(Date.now() - 10 * 365 * 24 * 60 * 60 * 1000),
+ interval: '1wk',
+ })
+ .catch(() => null),
+ };
+
+ const charts = await Promise.all([
+ chartPromises['1D'],
+ chartPromises['5D'],
+ chartPromises['1M'],
+ chartPromises['3M'],
+ chartPromises['6M'],
+ chartPromises['1Y'],
+ chartPromises['MAX'],
+ ]);
+
+ const [chart1D, chart5D, chart1M, chart3M, chart6M, chart1Y, chartMAX] =
+ charts;
+
+ if (!quote) {
+ throw new Error(`No data found for ticker: ${ticker}`);
+ }
+
+ let comparisonData: any = null;
+ if (params.comparisonNames.length > 0) {
+ const comparisonPromises = params.comparisonNames
+ .slice(0, 3)
+ .map(async (compName) => {
+ try {
+ const compFindings = await yf.search(compName);
+
+ if (compFindings.quotes.length === 0) return null;
+
+ const compTicker = compFindings.quotes[0].symbol as string;
+ const compQuote = await yf.quote(compTicker);
+ const compCharts = await Promise.all([
+ yf
+ .chart(compTicker, {
+ period1: new Date(Date.now() - 2 * 24 * 60 * 60 * 1000),
+ period2: new Date(),
+ interval: '5m',
+ })
+ .catch(() => null),
+ yf
+ .chart(compTicker, {
+ period1: new Date(Date.now() - 6 * 24 * 60 * 60 * 1000),
+ period2: new Date(),
+ interval: '15m',
+ })
+ .catch(() => null),
+ yf
+ .chart(compTicker, {
+ period1: new Date(Date.now() - 30 * 24 * 60 * 60 * 1000),
+ interval: '1d',
+ })
+ .catch(() => null),
+ yf
+ .chart(compTicker, {
+ period1: new Date(Date.now() - 90 * 24 * 60 * 60 * 1000),
+ interval: '1d',
+ })
+ .catch(() => null),
+ yf
+ .chart(compTicker, {
+ period1: new Date(Date.now() - 180 * 24 * 60 * 60 * 1000),
+ interval: '1d',
+ })
+ .catch(() => null),
+ yf
+ .chart(compTicker, {
+ period1: new Date(Date.now() - 365 * 24 * 60 * 60 * 1000),
+ interval: '1d',
+ })
+ .catch(() => null),
+ yf
+ .chart(compTicker, {
+ period1: new Date(
+ Date.now() - 10 * 365 * 24 * 60 * 60 * 1000,
+ ),
+ interval: '1wk',
+ })
+ .catch(() => null),
+ ]);
+ return {
+ ticker: compTicker,
+ name: compQuote.shortName || compTicker,
+ charts: compCharts,
+ };
+ } catch (error) {
+ console.error(
+ `Failed to fetch comparison ticker ${compName}:`,
+ error,
+ );
+ return null;
+ }
+ });
+ const compResults = await Promise.all(comparisonPromises);
+ comparisonData = compResults.filter((r) => r !== null);
+ }
+
+ const stockData = {
+ symbol: quote.symbol,
+ shortName: quote.shortName || quote.longName || ticker,
+ longName: quote.longName,
+ exchange: quote.fullExchangeName || quote.exchange,
+ currency: quote.currency,
+ quoteType: quote.quoteType,
+
+ marketState: quote.marketState,
+ regularMarketTime: quote.regularMarketTime,
+ postMarketTime: quote.postMarketTime,
+ preMarketTime: quote.preMarketTime,
+
+ regularMarketPrice: quote.regularMarketPrice,
+ regularMarketChange: quote.regularMarketChange,
+ regularMarketChangePercent: quote.regularMarketChangePercent,
+ regularMarketPreviousClose: quote.regularMarketPreviousClose,
+ regularMarketOpen: quote.regularMarketOpen,
+ regularMarketDayHigh: quote.regularMarketDayHigh,
+ regularMarketDayLow: quote.regularMarketDayLow,
+
+ postMarketPrice: quote.postMarketPrice,
+ postMarketChange: quote.postMarketChange,
+ postMarketChangePercent: quote.postMarketChangePercent,
+ preMarketPrice: quote.preMarketPrice,
+ preMarketChange: quote.preMarketChange,
+ preMarketChangePercent: quote.preMarketChangePercent,
+
+ regularMarketVolume: quote.regularMarketVolume,
+ averageDailyVolume3Month: quote.averageDailyVolume3Month,
+ averageDailyVolume10Day: quote.averageDailyVolume10Day,
+ bid: quote.bid,
+ bidSize: quote.bidSize,
+ ask: quote.ask,
+ askSize: quote.askSize,
+
+ fiftyTwoWeekLow: quote.fiftyTwoWeekLow,
+ fiftyTwoWeekHigh: quote.fiftyTwoWeekHigh,
+ fiftyTwoWeekChange: quote.fiftyTwoWeekChange,
+ fiftyTwoWeekChangePercent: quote.fiftyTwoWeekChangePercent,
+
+ marketCap: quote.marketCap,
+ trailingPE: quote.trailingPE,
+ forwardPE: quote.forwardPE,
+ priceToBook: quote.priceToBook,
+ bookValue: quote.bookValue,
+ earningsPerShare: quote.epsTrailingTwelveMonths,
+ epsForward: quote.epsForward,
+
+ dividendRate: quote.dividendRate,
+ dividendYield: quote.dividendYield,
+ exDividendDate: quote.exDividendDate,
+ trailingAnnualDividendRate: quote.trailingAnnualDividendRate,
+ trailingAnnualDividendYield: quote.trailingAnnualDividendYield,
+
+ beta: quote.beta,
+
+ fiftyDayAverage: quote.fiftyDayAverage,
+ fiftyDayAverageChange: quote.fiftyDayAverageChange,
+ fiftyDayAverageChangePercent: quote.fiftyDayAverageChangePercent,
+ twoHundredDayAverage: quote.twoHundredDayAverage,
+ twoHundredDayAverageChange: quote.twoHundredDayAverageChange,
+ twoHundredDayAverageChangePercent:
+ quote.twoHundredDayAverageChangePercent,
+
+ sector: quote.sector,
+ industry: quote.industry,
+ website: quote.website,
+
+ chartData: {
+ '1D': chart1D
+ ? {
+ timestamps: chart1D.quotes.map((q: any) => q.date.getTime()),
+ prices: chart1D.quotes.map((q: any) => q.close),
+ }
+ : null,
+ '5D': chart5D
+ ? {
+ timestamps: chart5D.quotes.map((q: any) => q.date.getTime()),
+ prices: chart5D.quotes.map((q: any) => q.close),
+ }
+ : null,
+ '1M': chart1M
+ ? {
+ timestamps: chart1M.quotes.map((q: any) => q.date.getTime()),
+ prices: chart1M.quotes.map((q: any) => q.close),
+ }
+ : null,
+ '3M': chart3M
+ ? {
+ timestamps: chart3M.quotes.map((q: any) => q.date.getTime()),
+ prices: chart3M.quotes.map((q: any) => q.close),
+ }
+ : null,
+ '6M': chart6M
+ ? {
+ timestamps: chart6M.quotes.map((q: any) => q.date.getTime()),
+ prices: chart6M.quotes.map((q: any) => q.close),
+ }
+ : null,
+ '1Y': chart1Y
+ ? {
+ timestamps: chart1Y.quotes.map((q: any) => q.date.getTime()),
+ prices: chart1Y.quotes.map((q: any) => q.close),
+ }
+ : null,
+ MAX: chartMAX
+ ? {
+ timestamps: chartMAX.quotes.map((q: any) => q.date.getTime()),
+ prices: chartMAX.quotes.map((q: any) => q.close),
+ }
+ : null,
+ },
+ comparisonData: comparisonData
+ ? comparisonData.map((comp: any) => ({
+ ticker: comp.ticker,
+ name: comp.name,
+ chartData: {
+ '1D': comp.charts[0]
+ ? {
+ timestamps: comp.charts[0].quotes.map((q: any) =>
+ q.date.getTime(),
+ ),
+ prices: comp.charts[0].quotes.map((q: any) => q.close),
+ }
+ : null,
+ '5D': comp.charts[1]
+ ? {
+ timestamps: comp.charts[1].quotes.map((q: any) =>
+ q.date.getTime(),
+ ),
+ prices: comp.charts[1].quotes.map((q: any) => q.close),
+ }
+ : null,
+ '1M': comp.charts[2]
+ ? {
+ timestamps: comp.charts[2].quotes.map((q: any) =>
+ q.date.getTime(),
+ ),
+ prices: comp.charts[2].quotes.map((q: any) => q.close),
+ }
+ : null,
+ '3M': comp.charts[3]
+ ? {
+ timestamps: comp.charts[3].quotes.map((q: any) =>
+ q.date.getTime(),
+ ),
+ prices: comp.charts[3].quotes.map((q: any) => q.close),
+ }
+ : null,
+ '6M': comp.charts[4]
+ ? {
+ timestamps: comp.charts[4].quotes.map((q: any) =>
+ q.date.getTime(),
+ ),
+ prices: comp.charts[4].quotes.map((q: any) => q.close),
+ }
+ : null,
+ '1Y': comp.charts[5]
+ ? {
+ timestamps: comp.charts[5].quotes.map((q: any) =>
+ q.date.getTime(),
+ ),
+ prices: comp.charts[5].quotes.map((q: any) => q.close),
+ }
+ : null,
+ MAX: comp.charts[6]
+ ? {
+ timestamps: comp.charts[6].quotes.map((q: any) =>
+ q.date.getTime(),
+ ),
+ prices: comp.charts[6].quotes.map((q: any) => q.close),
+ }
+ : null,
+ },
+ }))
+ : null,
+ };
+
+ return {
+ type: 'stock',
+ llmContext: `Current price of ${stockData.shortName} (${stockData.symbol}) is ${stockData.regularMarketPrice} ${stockData.currency}. Other details: ${JSON.stringify(
+ {
+ marketState: stockData.marketState,
+ regularMarketChange: stockData.regularMarketChange,
+ regularMarketChangePercent: stockData.regularMarketChangePercent,
+ marketCap: stockData.marketCap,
+ peRatio: stockData.trailingPE,
+ dividendYield: stockData.dividendYield,
+ },
+ )}`,
+ data: stockData,
+ };
+ } catch (error: any) {
+ return {
+ type: 'stock',
+ llmContext: 'Failed to fetch stock data.',
+ data: {
+ error: `Error fetching stock data: ${error.message || error}`,
+ ticker: params.name,
+ },
+ };
+ }
+ },
+};
+
+export default stockWidget;
diff --git a/src/lib/agents/search/widgets/weatherWidget.ts b/src/lib/agents/search/widgets/weatherWidget.ts
new file mode 100644
index 0000000..4739324
--- /dev/null
+++ b/src/lib/agents/search/widgets/weatherWidget.ts
@@ -0,0 +1,203 @@
+import z from 'zod';
+import { Widget } from '../types';
+import formatChatHistoryAsString from '@/lib/utils/formatHistory';
+
+const schema = z.object({
+ location: z
+ .string()
+ .describe(
+ 'Human-readable location name (e.g., "New York, NY, USA", "London, UK"). Use this OR lat/lon coordinates, never both. Leave empty string if providing coordinates.',
+ ),
+ lat: z
+ .number()
+ .describe(
+ 'Latitude coordinate in decimal degrees (e.g., 40.7128). Only use when location name is empty.',
+ ),
+ lon: z
+ .number()
+ .describe(
+ 'Longitude coordinate in decimal degrees (e.g., -74.0060). Only use when location name is empty.',
+ ),
+ notPresent: z
+ .boolean()
+ .describe('Whether there is no need for the weather widget.'),
+});
+
+const systemPrompt = `
+
+You are a location extractor for weather queries. You will receive a user follow-up and a conversation history.
+Your task is to determine if the user is asking about weather and extract the location they want weather for.
+
+
+
+- If the user is asking about weather, extract the location name OR coordinates (never both).
+- If using location name, set lat and lon to 0.
+- If using coordinates, set location to empty string.
+- If you cannot determine a valid location or the query is not weather-related, set notPresent to true.
+- Location should be specific (city, state/region, country) for best results.
+- You must provide the location so it can be used to fetch weather data; it cannot be left empty unless notPresent is true.
+- Make sure to infer short forms of location names (e.g., "NYC" -> "New York City", "LA" -> "Los Angeles").
+
+
+
+You must respond in the following JSON format without any extra text, explanations or filler sentences:
+{
+ "location": string,
+ "lat": number,
+ "lon": number,
+ "notPresent": boolean
+}
+
+`;
+
+const weatherWidget: Widget = {
+ type: 'weatherWidget',
+ shouldExecute: (classification) =>
+ classification.classification.showWeatherWidget,
+ execute: async (input) => {
+ const output = await input.llm.generateObject({
+ messages: [
+ {
+ role: 'system',
+ content: systemPrompt,
+ },
+ {
+ role: 'user',
+ content: `<conversation>\n${formatChatHistoryAsString(input.chatHistory)}\n</conversation>\n\n<follow_up>\n${input.followUp}\n</follow_up>`,
+ },
+ ],
+ schema,
+ });
+
+ if (output.notPresent) {
+ return;
+ }
+
+ const params = output;
+
+ try {
+ if (
+ params.location === '' &&
+ (params.lat === undefined || params.lon === undefined)
+ ) {
+ throw new Error(
+ 'Either location name or both latitude and longitude must be provided.',
+ );
+ }
+
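+ // A location name is forward-geocoded via OpenStreetMap Nominatim,
+ // then Open-Meteo is queried with the resulting coordinates.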
+ if (params.location !== '') {
+ const openStreetMapUrl = `https://nominatim.openstreetmap.org/search?q=${encodeURIComponent(params.location)}&format=json&limit=1`;
+
+ const locationRes = await fetch(openStreetMapUrl, {
+ headers: {
+ 'User-Agent': 'Perplexica',
+ 'Content-Type': 'application/json',
+ },
+ });
+
+ const data = await locationRes.json();
+
+ const location = data[0];
+
+ if (!location) {
+ throw new Error(
+ `Could not find coordinates for location: ${params.location}`,
+ );
+ }
+
+ const weatherRes = await fetch(
+ `https://api.open-meteo.com/v1/forecast?latitude=${location.lat}&longitude=${location.lon}&current=temperature_2m,relative_humidity_2m,apparent_temperature,is_day,precipitation,rain,showers,snowfall,weather_code,cloud_cover,pressure_msl,surface_pressure,wind_speed_10m,wind_direction_10m,wind_gusts_10m&hourly=temperature_2m,precipitation_probability,precipitation,weather_code&daily=weather_code,temperature_2m_max,temperature_2m_min,precipitation_sum,precipitation_probability_max&timezone=auto&forecast_days=7`,
+ {
+ headers: {
+ 'User-Agent': 'Perplexica',
+ 'Content-Type': 'application/json',
+ },
+ },
+ );
+
+ const weatherData = await weatherRes.json();
+
+ return {
+ type: 'weather',
+ llmContext: `Weather in ${params.location} is ${JSON.stringify(weatherData.current)}`,
+ data: {
+ location: params.location,
+ latitude: location.lat,
+ longitude: location.lon,
+ current: weatherData.current,
+ hourly: {
+ time: weatherData.hourly.time.slice(0, 24),
+ temperature_2m: weatherData.hourly.temperature_2m.slice(0, 24),
+ precipitation_probability:
+ weatherData.hourly.precipitation_probability.slice(0, 24),
+ precipitation: weatherData.hourly.precipitation.slice(0, 24),
+ weather_code: weatherData.hourly.weather_code.slice(0, 24),
+ },
+ daily: weatherData.daily,
+ timezone: weatherData.timezone,
+ },
+ };
+ } else if (params.lat !== undefined && params.lon !== undefined) {
+ const [weatherRes, locationRes] = await Promise.all([
+ fetch(
+ `https://api.open-meteo.com/v1/forecast?latitude=${params.lat}&longitude=${params.lon}&current=temperature_2m,relative_humidity_2m,apparent_temperature,is_day,precipitation,rain,showers,snowfall,weather_code,cloud_cover,pressure_msl,surface_pressure,wind_speed_10m,wind_direction_10m,wind_gusts_10m&hourly=temperature_2m,precipitation_probability,precipitation,weather_code&daily=weather_code,temperature_2m_max,temperature_2m_min,precipitation_sum,precipitation_probability_max&timezone=auto&forecast_days=7`,
+ {
+ headers: {
+ 'User-Agent': 'Perplexica',
+ 'Content-Type': 'application/json',
+ },
+ },
+ ),
+ fetch(
+ `https://nominatim.openstreetmap.org/reverse?lat=${params.lat}&lon=${params.lon}&format=json`,
+ {
+ headers: {
+ 'User-Agent': 'Perplexica',
+ 'Content-Type': 'application/json',
+ },
+ },
+ ),
+ ]);
+
+ const weatherData = await weatherRes.json();
+ const locationData = await locationRes.json();
+
+ return {
+ type: 'weather',
+ llmContext: `Weather in ${locationData.display_name} is ${JSON.stringify(weatherData.current)}`,
+ data: {
+ location: locationData.display_name,
+ latitude: params.lat,
+ longitude: params.lon,
+ current: weatherData.current,
+ hourly: {
+ time: weatherData.hourly.time.slice(0, 24),
+ temperature_2m: weatherData.hourly.temperature_2m.slice(0, 24),
+ precipitation_probability:
+ weatherData.hourly.precipitation_probability.slice(0, 24),
+ precipitation: weatherData.hourly.precipitation.slice(0, 24),
+ weather_code: weatherData.hourly.weather_code.slice(0, 24),
+ },
+ daily: weatherData.daily,
+ timezone: weatherData.timezone,
+ },
+ };
+ }
+
+ return {
+ type: 'weather',
+ llmContext: 'No valid location or coordinates provided.',
+ data: null,
+ };
+ } catch (err) {
+ return {
+ type: 'weather',
+ llmContext: 'Failed to fetch weather data.',
+ data: {
+ error: `Error fetching weather data: ${err}`,
+ },
+ };
+ }
+ },
+};
+export default weatherWidget;
diff --git a/src/lib/agents/suggestions/index.ts b/src/lib/agents/suggestions/index.ts
new file mode 100644
index 0000000..15b3598
--- /dev/null
+++ b/src/lib/agents/suggestions/index.ts
@@ -0,0 +1,39 @@
+import formatChatHistoryAsString from '@/lib/utils/formatHistory';
+import { suggestionGeneratorPrompt } from '@/lib/prompts/suggestions';
+import { ChatTurnMessage } from '@/lib/types';
+import z from 'zod';
+import BaseLLM from '@/lib/models/base/llm';
+
+type SuggestionGeneratorInput = {
+ chatHistory: ChatTurnMessage[];
+};
+
+const schema = z.object({
+ suggestions: z
+ .array(z.string())
+ .describe('List of suggested questions or prompts'),
+});
+
+const generateSuggestions = async (
+ input: SuggestionGeneratorInput,
+ llm: BaseLLM,
+) => {
+ const res = await llm.generateObject({
+ messages: [
+ {
+ role: 'system',
+ content: suggestionGeneratorPrompt,
+ },
+ {
+ role: 'user',
+ content: `<conversation>\n${formatChatHistoryAsString(input.chatHistory)}\n</conversation>`,
+ },
+ ],
+ schema,
+ });
+
+ return res.suggestions;
+};
+
+export default generateSuggestions;
diff --git a/src/lib/chains/imageSearchAgent.ts b/src/lib/chains/imageSearchAgent.ts
deleted file mode 100644
index a91b7bb..0000000
--- a/src/lib/chains/imageSearchAgent.ts
+++ /dev/null
@@ -1,105 +0,0 @@
-import {
- RunnableSequence,
- RunnableMap,
- RunnableLambda,
-} from '@langchain/core/runnables';
-import { ChatPromptTemplate } from '@langchain/core/prompts';
-import formatChatHistoryAsString from '../utils/formatHistory';
-import { BaseMessage } from '@langchain/core/messages';
-import { StringOutputParser } from '@langchain/core/output_parsers';
-import { searchSearxng } from '../searxng';
-import type { BaseChatModel } from '@langchain/core/language_models/chat_models';
-import LineOutputParser from '../outputParsers/lineOutputParser';
-
-const imageSearchChainPrompt = `
-You will be given a conversation below and a follow-up question. You need to rephrase the follow-up question so it is a standalone question that can be used by the LLM to search the web for images.
-You need to make sure the rephrased question agrees with the conversation and is relevant to the conversation.
-Output only the rephrased query wrapped in a <query> XML element. Do not include any explanation or additional text.
-`;
-
-type ImageSearchChainInput = {
- chat_history: BaseMessage[];
- query: string;
-};
-
-interface ImageSearchResult {
- img_src: string;
- url: string;
- title: string;
-}
-
-const strParser = new StringOutputParser();
-
-const createImageSearchChain = (llm: BaseChatModel) => {
- return RunnableSequence.from([
- RunnableMap.from({
- chat_history: (input: ImageSearchChainInput) => {
- return formatChatHistoryAsString(input.chat_history);
- },
- query: (input: ImageSearchChainInput) => {
- return input.query;
- },
- }),
- ChatPromptTemplate.fromMessages([
- ['system', imageSearchChainPrompt],
- [
- 'user',
- '<conversation>\n</conversation>\n\n<follow_up>\nWhat is a cat?\n</follow_up>',
- ],
- ['assistant', '<query>A cat</query>'],
-
- [
- 'user',
- '<conversation>\n</conversation>\n\n<follow_up>\nWhat is a car? How does it work?\n</follow_up>',
- ],
- ['assistant', '<query>Car working</query>'],
- [
- 'user',
- '<conversation>\n</conversation>\n\n<follow_up>\nHow does an AC work?\n</follow_up>',
- ],
- ['assistant', '<query>AC working</query>'],
- [
- 'user',
- '<conversation>\n{chat_history}\n</conversation>\n\n<follow_up>\n{query}\n</follow_up>',
- ],
- ]),
- llm,
- strParser,
- RunnableLambda.from(async (input: string) => {
- const queryParser = new LineOutputParser({
- key: 'query',
- });
-
- return await queryParser.parse(input);
- }),
- RunnableLambda.from(async (input: string) => {
- const res = await searchSearxng(input, {
- engines: ['bing images', 'google images'],
- });
-
- const images: ImageSearchResult[] = [];
-
- res.results.forEach((result) => {
- if (result.img_src && result.url && result.title) {
- images.push({
- img_src: result.img_src,
- url: result.url,
- title: result.title,
- });
- }
- });
-
- return images.slice(0, 10);
- }),
- ]);
-};
-
-const handleImageSearch = (
- input: ImageSearchChainInput,
- llm: BaseChatModel,
-) => {
- const imageSearchChain = createImageSearchChain(llm);
- return imageSearchChain.invoke(input);
-};
-
-export default handleImageSearch;
diff --git a/src/lib/chains/suggestionGeneratorAgent.ts b/src/lib/chains/suggestionGeneratorAgent.ts
deleted file mode 100644
index 9129059..0000000
--- a/src/lib/chains/suggestionGeneratorAgent.ts
+++ /dev/null
@@ -1,55 +0,0 @@
-import { RunnableSequence, RunnableMap } from '@langchain/core/runnables';
-import ListLineOutputParser from '../outputParsers/listLineOutputParser';
-import { PromptTemplate } from '@langchain/core/prompts';
-import formatChatHistoryAsString from '../utils/formatHistory';
-import { BaseMessage } from '@langchain/core/messages';
-import { BaseChatModel } from '@langchain/core/language_models/chat_models';
-import { ChatOpenAI } from '@langchain/openai';
-
-const suggestionGeneratorPrompt = `
-You are an AI suggestion generator for an AI powered search engine. You will be given a conversation below. You need to generate 4-5 suggestions based on the conversation. The suggestion should be relevant to the conversation that can be used by the user to ask the chat model for more information.
-You need to make sure the suggestions are relevant to the conversation and are helpful to the user. Keep a note that the user might use these suggestions to ask a chat model for more information.
-Make sure the suggestions are medium in length and are informative and relevant to the conversation.
-
-Provide these suggestions separated by newlines between the XML tags <suggestions> and </suggestions>. For example:
-
-<suggestions>
-Tell me more about SpaceX and their recent projects
-What is the latest news on SpaceX?
-Who is the CEO of SpaceX?
-</suggestions>
-
-Conversation:
-{chat_history}
-`;
-
-type SuggestionGeneratorInput = {
- chat_history: BaseMessage[];
-};
-
-const outputParser = new ListLineOutputParser({
- key: 'suggestions',
-});
-
-const createSuggestionGeneratorChain = (llm: BaseChatModel) => {
- return RunnableSequence.from([
- RunnableMap.from({
- chat_history: (input: SuggestionGeneratorInput) =>
- formatChatHistoryAsString(input.chat_history),
- }),
- PromptTemplate.fromTemplate(suggestionGeneratorPrompt),
- llm,
- outputParser,
- ]);
-};
-
-const generateSuggestions = (
- input: SuggestionGeneratorInput,
- llm: BaseChatModel,
-) => {
- (llm as unknown as ChatOpenAI).temperature = 0;
- const suggestionGeneratorChain = createSuggestionGeneratorChain(llm);
- return suggestionGeneratorChain.invoke(input);
-};
-
-export default generateSuggestions;
diff --git a/src/lib/chains/videoSearchAgent.ts b/src/lib/chains/videoSearchAgent.ts
deleted file mode 100644
index 3f878a8..0000000
--- a/src/lib/chains/videoSearchAgent.ts
+++ /dev/null
@@ -1,110 +0,0 @@
-import {
- RunnableSequence,
- RunnableMap,
- RunnableLambda,
-} from '@langchain/core/runnables';
-import { ChatPromptTemplate } from '@langchain/core/prompts';
-import formatChatHistoryAsString from '../utils/formatHistory';
-import { BaseMessage } from '@langchain/core/messages';
-import { StringOutputParser } from '@langchain/core/output_parsers';
-import { searchSearxng } from '../searxng';
-import type { BaseChatModel } from '@langchain/core/language_models/chat_models';
-import LineOutputParser from '../outputParsers/lineOutputParser';
-
-const videoSearchChainPrompt = `
-You will be given a conversation below and a follow-up question. You need to rephrase the follow-up question so it is a standalone question that can be used by the LLM to search YouTube for videos.
-You need to make sure the rephrased question agrees with the conversation and is relevant to the conversation.
-Output only the rephrased query wrapped in a <query> XML element. Do not include any explanation or additional text.
-`;
-
-type VideoSearchChainInput = {
- chat_history: BaseMessage[];
- query: string;
-};
-
-interface VideoSearchResult {
- img_src: string;
- url: string;
- title: string;
- iframe_src: string;
-}
-
-const strParser = new StringOutputParser();
-
-const createVideoSearchChain = (llm: BaseChatModel) => {
- return RunnableSequence.from([
- RunnableMap.from({
- chat_history: (input: VideoSearchChainInput) => {
- return formatChatHistoryAsString(input.chat_history);
- },
- query: (input: VideoSearchChainInput) => {
- return input.query;
- },
- }),
- ChatPromptTemplate.fromMessages([
- ['system', videoSearchChainPrompt],
- [
- 'user',
- '<conversation>\n</conversation>\n\n<follow_up>\nHow does a car work?\n</follow_up>',
- ],
- ['assistant', '<query>How does a car work?</query>'],
- [
- 'user',
- '<conversation>\n</conversation>\n\n<follow_up>\nWhat is the theory of relativity?\n</follow_up>',
- ],
- ['assistant', '<query>Theory of relativity</query>'],
- [
- 'user',
- '<conversation>\n</conversation>\n\n<follow_up>\nHow does an AC work?\n</follow_up>',
- ],
- ['assistant', '<query>AC working</query>'],
- [
- 'user',
- '<conversation>\n{chat_history}\n</conversation>\n\n<follow_up>\n{query}\n</follow_up>',
- ],
- ]),
- llm,
- strParser,
- RunnableLambda.from(async (input: string) => {
- const queryParser = new LineOutputParser({
- key: 'query',
- });
- return await queryParser.parse(input);
- }),
- RunnableLambda.from(async (input: string) => {
- const res = await searchSearxng(input, {
- engines: ['youtube'],
- });
-
- const videos: VideoSearchResult[] = [];
-
- res.results.forEach((result) => {
- if (
- result.thumbnail &&
- result.url &&
- result.title &&
- result.iframe_src
- ) {
- videos.push({
- img_src: result.thumbnail,
- url: result.url,
- title: result.title,
- iframe_src: result.iframe_src,
- });
- }
- });
-
- return videos.slice(0, 10);
- }),
- ]);
-};
-
-const handleVideoSearch = (
- input: VideoSearchChainInput,
- llm: BaseChatModel,
-) => {
- const videoSearchChain = createVideoSearchChain(llm);
- return videoSearchChain.invoke(input);
-};
-
-export default handleVideoSearch;
diff --git a/src/lib/config/clientRegistry.ts b/src/lib/config/clientRegistry.ts
index 7c8fc24..f23d7ad 100644
--- a/src/lib/config/clientRegistry.ts
+++ b/src/lib/config/clientRegistry.ts
@@ -11,3 +11,19 @@ export const getAutoMediaSearch = () =>
export const getSystemInstructions = () =>
getClientConfig('systemInstructions', '');
+
+export const getShowWeatherWidget = () =>
+ getClientConfig('showWeatherWidget', 'true') === 'true';
+
+export const getShowNewsWidget = () =>
+ getClientConfig('showNewsWidget', 'true') === 'true';
+
+export const getMeasurementUnit = () => {
+ const value =
+ getClientConfig('measureUnit') ??
+ getClientConfig('measurementUnit', 'metric');
+
+ if (typeof value !== 'string') return 'metric';
+
+ return value.toLowerCase();
+};
diff --git a/src/lib/config/index.ts b/src/lib/config/index.ts
index 9b69c8a..4830538 100644
--- a/src/lib/config/index.ts
+++ b/src/lib/config/index.ts
@@ -69,6 +69,24 @@ class ConfigManager {
default: true,
scope: 'client',
},
+ {
+ name: 'Show weather widget',
+ key: 'showWeatherWidget',
+ type: 'switch',
+ required: false,
+ description: 'Display the weather card on the home screen.',
+ default: true,
+ scope: 'client',
+ },
+ {
+ name: 'Show news widget',
+ key: 'showNewsWidget',
+ type: 'switch',
+ required: false,
+ description: 'Display the recent news card on the home screen.',
+ default: true,
+ scope: 'client',
+ },
],
personalization: [
{
diff --git a/src/lib/db/migrate.ts b/src/lib/db/migrate.ts
index e4c6987..e0efb7c 100644
--- a/src/lib/db/migrate.ts
+++ b/src/lib/db/migrate.ts
@@ -18,12 +18,18 @@ db.exec(`
`);
function sanitizeSql(content: string) {
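+ // Drizzle separates statements with '--> statement-breakpoint' markers;
+ // split on them so each statement can be executed on its own.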
- return content
- .split(/\r?\n/)
- .filter(
- (l) => !l.trim().startsWith('-->') && !l.includes('statement-breakpoint'),
+ const statements = content
+ .split(/--> statement-breakpoint/g)
+ .map((stmt) =>
+ stmt
+ .split(/\r?\n/)
+ .filter((l) => !l.trim().startsWith('-->'))
+ .join('\n')
+ .trim(),
)
- .join('\n');
+ .filter((stmt) => stmt.length > 0);
+
+ return statements;
}
fs.readdirSync(migrationsFolder)
@@ -32,13 +38,14 @@ fs.readdirSync(migrationsFolder)
.forEach((file) => {
const filePath = path.join(migrationsFolder, file);
let content = fs.readFileSync(filePath, 'utf-8');
- content = sanitizeSql(content);
+ const statements = sanitizeSql(content);
const migrationName = file.split('_')[0] || file;
const already = db
.prepare('SELECT 1 FROM ran_migrations WHERE name = ?')
.get(migrationName);
+
if (already) {
console.log(`Skipping already-applied migration: ${file}`);
return;
@@ -107,8 +114,167 @@ fs.readdirSync(migrationsFolder)
db.exec('DROP TABLE messages;');
db.exec('ALTER TABLE messages_with_sources RENAME TO messages;');
+ } else if (migrationName === '0002') {
+ /* Migrate chat */
+ db.exec(`
+ CREATE TABLE IF NOT EXISTS chats_new (
+ id TEXT PRIMARY KEY,
+ title TEXT NOT NULL,
+ createdAt TEXT NOT NULL,
+ sources TEXT DEFAULT '[]',
+ files TEXT DEFAULT '[]'
+ );
+ `);
+
+ const chats = db
+ .prepare('SELECT id, title, createdAt, files FROM chats')
+ .all();
+
+ const insertChat = db.prepare(`
+ INSERT INTO chats_new (id, title, createdAt, sources, files)
+ VALUES (?, ?, ?, ?, ?)
+ `);
+
+ chats.forEach((chat: any) => {
+ let files = chat.files;
+ while (typeof files === 'string') {
+ files = JSON.parse(files || '[]');
+ }
+
+ insertChat.run(
+ chat.id,
+ chat.title,
+ chat.createdAt,
+ '["web"]',
+ JSON.stringify(files),
+ );
+ });
+
+ db.exec('DROP TABLE chats;');
+ db.exec('ALTER TABLE chats_new RENAME TO chats;');
+
+ /* Migrate messages */
+
+ db.exec(`
+ CREATE TABLE IF NOT EXISTS messages_new (
+ id INTEGER PRIMARY KEY,
+ messageId TEXT NOT NULL,
+ chatId TEXT NOT NULL,
+ backendId TEXT NOT NULL,
+ query TEXT NOT NULL,
+ createdAt TEXT NOT NULL,
+ responseBlocks TEXT DEFAULT '[]',
+ status TEXT DEFAULT 'answering'
+ );
+ `);
+
+ const messages = db
+ .prepare(
+ 'SELECT id, messageId, chatId, type, content, createdAt, sources FROM messages ORDER BY id ASC',
+ )
+ .all();
+
+ const insertMessage = db.prepare(`
+ INSERT INTO messages_new (messageId, chatId, backendId, query, createdAt, responseBlocks, status)
+ VALUES (?, ?, ?, ?, ?, ?, ?)
+ `);
+
+ let currentMessageData: {
+ sources?: any[];
+ response?: string;
+ query?: string;
+ messageId?: string;
+ chatId?: string;
+ createdAt?: string;
+ } = {};
+ let lastCompleted = true;
+
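+ // Old messages were stored as separate user/source/assistant rows; walk
+ // them in order and fold each triplet into a single row of response
+ // blocks, tolerating turns that never received an assistant reply.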
+ messages.forEach((msg: any) => {
+ if (msg.type === 'user' && lastCompleted) {
+ currentMessageData = {};
+ currentMessageData.messageId = msg.messageId;
+ currentMessageData.chatId = msg.chatId;
+ currentMessageData.query = msg.content;
+ currentMessageData.createdAt = msg.createdAt;
+ lastCompleted = false;
+ } else if (msg.type === 'source' && !lastCompleted) {
+ let sources = msg.sources;
+
+ while (typeof sources === 'string') {
+ sources = JSON.parse(sources || '[]');
+ }
+
+ currentMessageData.sources = sources;
+ } else if (msg.type === 'assistant' && !lastCompleted) {
+ currentMessageData.response = msg.content;
+ insertMessage.run(
+ currentMessageData.messageId,
+ currentMessageData.chatId,
+ `${currentMessageData.messageId}-backend`,
+ currentMessageData.query,
+ currentMessageData.createdAt,
+ JSON.stringify([
+ {
+ id: crypto.randomUUID(),
+ type: 'text',
+ data: currentMessageData.response || '',
+ },
+ ...(currentMessageData.sources &&
+ currentMessageData.sources.length > 0
+ ? [
+ {
+ id: crypto.randomUUID(),
+ type: 'source',
+ data: currentMessageData.sources,
+ },
+ ]
+ : []),
+ ]),
+ 'completed',
+ );
+
+ lastCompleted = true;
+ } else if (msg.type === 'user' && !lastCompleted) {
+ /* Message wasn't completed so we'll just create the record with empty response */
+ insertMessage.run(
+ currentMessageData.messageId,
+ currentMessageData.chatId,
+ `${currentMessageData.messageId}-backend`,
+ currentMessageData.query,
+ currentMessageData.createdAt,
+ JSON.stringify([
+ {
+ id: crypto.randomUUID(),
+ type: 'text',
+ data: '',
+ },
+ ...(currentMessageData.sources &&
+ currentMessageData.sources.length > 0
+ ? [
+ {
+ id: crypto.randomUUID(),
+ type: 'source',
+ data: currentMessageData.sources,
+ },
+ ]
+ : []),
+ ]),
+ 'completed',
+ );
+
+ lastCompleted = true;
+ }
+ });
+
+ db.exec('DROP TABLE messages;');
+ db.exec('ALTER TABLE messages_new RENAME TO messages;');
} else {
- db.exec(content);
+ // Execute each statement separately
+ statements.forEach((stmt) => {
+ if (stmt.trim()) {
+ db.exec(stmt);
+ }
+ });
}
db.prepare('INSERT OR IGNORE INTO ran_migrations (name) VALUES (?)').run(
diff --git a/src/lib/db/schema.ts b/src/lib/db/schema.ts
index b6924ac..6017d9f 100644
--- a/src/lib/db/schema.ts
+++ b/src/lib/db/schema.ts
@@ -1,26 +1,24 @@
import { sql } from 'drizzle-orm';
import { text, integer, sqliteTable } from 'drizzle-orm/sqlite-core';
-import { Document } from '@langchain/core/documents';
+import { Block } from '../types';
+import { SearchSources } from '../agents/search/types';
export const messages = sqliteTable('messages', {
id: integer('id').primaryKey(),
- role: text('type', { enum: ['assistant', 'user', 'source'] }).notNull(),
- chatId: text('chatId').notNull(),
- createdAt: text('createdAt')
- .notNull()
- .default(sql`CURRENT_TIMESTAMP`),
messageId: text('messageId').notNull(),
-
- content: text('content'),
-
- sources: text('sources', {
- mode: 'json',
- })
- .$type<Document[]>()
+ chatId: text('chatId').notNull(),
+ backendId: text('backendId').notNull(),
+ query: text('query').notNull(),
+ createdAt: text('createdAt').notNull(),
+ responseBlocks: text('responseBlocks', { mode: 'json' })
+ .$type<Block[]>()
.default(sql`'[]'`),
+ status: text({ enum: ['answering', 'completed', 'error'] }).default(
+ 'answering',
+ ),
});
-interface File {
+interface DBFile {
name: string;
fileId: string;
}
@@ -29,8 +27,12 @@ export const chats = sqliteTable('chats', {
id: text('id').primaryKey(),
title: text('title').notNull(),
createdAt: text('createdAt').notNull(),
- focusMode: text('focusMode').notNull(),
+ sources: text('sources', {
+ mode: 'json',
+ })
+ .$type<SearchSources[]>()
+ .default(sql`'[]'`),
files: text('files', { mode: 'json' })
- .$type()
- .$type<File[]>()
+ .$type<DBFile[]>()
});
diff --git a/src/lib/hooks/useChat.tsx b/src/lib/hooks/useChat.tsx
index ee7e9c7..fdb5743 100644
--- a/src/lib/hooks/useChat.tsx
+++ b/src/lib/hooks/useChat.tsx
@@ -1,13 +1,7 @@
'use client';
-import {
- AssistantMessage,
- ChatTurn,
- Message,
- SourceMessage,
- SuggestionMessage,
- UserMessage,
-} from '@/components/ChatWindow';
+import { Message } from '@/components/ChatWindow';
+import { Block } from '@/lib/types';
import {
createContext,
useContext,
@@ -22,25 +16,25 @@ import { toast } from 'sonner';
import { getSuggestions } from '../actions';
import { MinimalProvider } from '../models/types';
import { getAutoMediaSearch } from '../config/clientRegistry';
+import { applyPatch } from 'rfc6902';
+import { Widget } from '@/components/ChatWindow';
export type Section = {
- userMessage: UserMessage;
- assistantMessage: AssistantMessage | undefined;
- parsedAssistantMessage: string | undefined;
- speechMessage: string | undefined;
- sourceMessage: SourceMessage | undefined;
+ message: Message;
+ widgets: Widget[];
+ parsedTextBlocks: string[];
+ speechMessage: string;
thinkingEnded: boolean;
suggestions?: string[];
};
type ChatContext = {
messages: Message[];
- chatTurns: ChatTurn[];
sections: Section[];
chatHistory: [string, string][];
files: File[];
fileIds: string[];
- focusMode: string;
+ sources: string[];
chatId: string | undefined;
optimizationMode: string;
isMessagesLoaded: boolean;
@@ -51,8 +45,10 @@ type ChatContext = {
hasError: boolean;
chatModelProvider: ChatModelProvider;
embeddingModelProvider: EmbeddingModelProvider;
+ researchEnded: boolean;
+ setResearchEnded: (ended: boolean) => void;
setOptimizationMode: (mode: string) => void;
- setFocusMode: (mode: string) => void;
+ setSources: (sources: string[]) => void;
setFiles: (files: File[]) => void;
setFileIds: (fileIds: string[]) => void;
sendMessage: (
@@ -180,7 +176,7 @@ const loadMessages = async (
setMessages: (messages: Message[]) => void,
setIsMessagesLoaded: (loaded: boolean) => void,
setChatHistory: (history: [string, string][]) => void,
- setFocusMode: (mode: string) => void,
+ setSources: (sources: string[]) => void,
setNotFound: (notFound: boolean) => void,
setFiles: (files: File[]) => void,
setFileIds: (fileIds: string[]) => void,
@@ -204,18 +200,26 @@ const loadMessages = async (
setMessages(messages);
- const chatTurns = messages.filter(
- (msg): msg is ChatTurn => msg.role === 'user' || msg.role === 'assistant',
- );
+ const history: [string, string][] = [];
+ messages.forEach((msg) => {
+ history.push(['human', msg.query]);
- const history = chatTurns.map((msg) => {
- return [msg.role, msg.content];
- }) as [string, string][];
+ const textBlocks = msg.responseBlocks
+ .filter(
+ (block): block is Block & { type: 'text' } => block.type === 'text',
+ )
+ .map((block) => block.data)
+ .join('\n');
+
+ if (textBlocks) {
+ history.push(['assistant', textBlocks]);
+ }
+ });
console.debug(new Date(), 'app:messages_loaded');
- if (chatTurns.length > 0) {
- document.title = chatTurns[0].content;
+ if (messages.length > 0) {
+ document.title = messages[0].query;
}
const files = data.chat.files.map((file: any) => {
@@ -230,7 +234,7 @@ const loadMessages = async (
setFileIds(files.map((file: File) => file.fileId));
setChatHistory(history);
- setFocusMode(data.chat.focusMode);
+ setSources(data.chat.sources);
setIsMessagesLoaded(true);
};
@@ -239,31 +243,33 @@ export const chatContext = createContext({
chatId: '',
fileIds: [],
files: [],
- focusMode: '',
+ sources: [],
hasError: false,
isMessagesLoaded: false,
isReady: false,
loading: false,
messageAppeared: false,
messages: [],
- chatTurns: [],
sections: [],
notFound: false,
optimizationMode: '',
chatModelProvider: { key: '', providerId: '' },
embeddingModelProvider: { key: '', providerId: '' },
+ researchEnded: false,
rewrite: () => {},
sendMessage: async () => {},
setFileIds: () => {},
setFiles: () => {},
- setFocusMode: () => {},
+ setSources: () => {},
setOptimizationMode: () => {},
setChatModelProvider: () => {},
setEmbeddingModelProvider: () => {},
+ setResearchEnded: () => {},
});
export const ChatProvider = ({ children }: { children: React.ReactNode }) => {
const params: { chatId: string } = useParams();
+
const searchParams = useSearchParams();
const initialMessage = searchParams.get('q');
@@ -273,13 +279,15 @@ export const ChatProvider = ({ children }: { children: React.ReactNode }) => {
const [loading, setLoading] = useState(false);
const [messageAppeared, setMessageAppeared] = useState(false);
+ const [researchEnded, setResearchEnded] = useState(false);
+
const [chatHistory, setChatHistory] = useState<[string, string][]>([]);
const [messages, setMessages] = useState<Message[]>([]);
const [files, setFiles] = useState<File[]>([]);
const [fileIds, setFileIds] = useState<string[]>([]);
- const [focusMode, setFocusMode] = useState('webSearch');
+ const [sources, setSources] = useState(['web']);
const [optimizationMode, setOptimizationMode] = useState('speed');
const [isMessagesLoaded, setIsMessagesLoaded] = useState(false);
@@ -305,66 +313,44 @@ export const ChatProvider = ({ children }: { children: React.ReactNode }) => {
const messagesRef = useRef<Message[]>([]);
- const chatTurns = useMemo((): ChatTurn[] => {
- return messages.filter(
- (msg): msg is ChatTurn => msg.role === 'user' || msg.role === 'assistant',
- );
- }, [messages]);
-
const sections = useMemo(() => {
- const sections: Section[] = [];
+ return messages.map((msg) => {
+ const textBlocks: string[] = [];
+ let speechMessage = '';
+ let thinkingEnded = false;
+ let suggestions: string[] = [];
- messages.forEach((msg, i) => {
- if (msg.role === 'user') {
- const nextUserMessageIndex = messages.findIndex(
- (m, j) => j > i && m.role === 'user',
- );
+ const sourceBlocks = msg.responseBlocks.filter(
+ (block): block is Block & { type: 'source' } => block.type === 'source',
+ );
+ const sources = sourceBlocks.flatMap((block) => block.data);
- const aiMessage = messages.find(
- (m, j) =>
- j > i &&
- m.role === 'assistant' &&
- (nextUserMessageIndex === -1 || j < nextUserMessageIndex),
- ) as AssistantMessage | undefined;
+ const widgetBlocks = msg.responseBlocks
+ .filter((b) => b.type === 'widget')
+ .map((b) => b.data) as Widget[];
- const sourceMessage = messages.find(
- (m, j) =>
- j > i &&
- m.role === 'source' &&
- m.sources &&
- (nextUserMessageIndex === -1 || j < nextUserMessageIndex),
- ) as SourceMessage | undefined;
-
- let thinkingEnded = false;
- let processedMessage = aiMessage?.content ?? '';
- let speechMessage = aiMessage?.content ?? '';
- let suggestions: string[] = [];
-
- if (aiMessage) {
+ msg.responseBlocks.forEach((block) => {
+ if (block.type === 'text') {
+ let processedText = block.data;
const citationRegex = /\[([^\]]+)\]/g;
const regex = /\[(\d+)\]/g;
- if (processedMessage.includes('<think>')) {
- const openThinkTag =
- processedMessage.match(/<think>/g)?.length || 0;
+ if (processedText.includes('<think>')) {
+ const openThinkTag = processedText.match(/<think>/g)?.length || 0;
const closeThinkTag =
- processedMessage.match(/<\/think>/g)?.length || 0;
+ processedText.match(/<\/think>/g)?.length || 0;
if (openThinkTag && !closeThinkTag) {
- processedMessage += '</think>';
+ processedText += '</think>';
}
}
- if (aiMessage.content.includes('</think>')) {
+ if (block.data.includes('</think>')) {
thinkingEnded = true;
}
- if (
- sourceMessage &&
- sourceMessage.sources &&
- sourceMessage.sources.length > 0
- ) {
- processedMessage = processedMessage.replace(
+ if (sources.length > 0) {
+ processedText = processedText.replace(
citationRegex,
(_, capturedContent: string) => {
const numbers = capturedContent
@@ -379,7 +365,7 @@ export const ChatProvider = ({ children }: { children: React.ReactNode }) => {
return `[${numStr}]`;
}
- const source = sourceMessage.sources?.[number - 1];
+ const source = sources[number - 1];
const url = source?.metadata?.url;
if (url) {
@@ -393,38 +379,75 @@ export const ChatProvider = ({ children }: { children: React.ReactNode }) => {
return linksHtml;
},
);
- speechMessage = aiMessage.content.replace(regex, '');
+ speechMessage += block.data.replace(regex, '');
} else {
- processedMessage = processedMessage.replace(regex, '');
- speechMessage = aiMessage.content.replace(regex, '');
+ processedText = processedText.replace(regex, '');
+ speechMessage += block.data.replace(regex, '');
}
- const suggestionMessage = messages.find(
- (m, j) =>
- j > i &&
- m.role === 'suggestion' &&
- (nextUserMessageIndex === -1 || j < nextUserMessageIndex),
- ) as SuggestionMessage | undefined;
+ textBlocks.push(processedText);
+ } else if (block.type === 'suggestion') {
+ suggestions = block.data;
+ }
+ });
- if (suggestionMessage && suggestionMessage.suggestions.length > 0) {
- suggestions = suggestionMessage.suggestions;
+ return {
+ message: msg,
+ parsedTextBlocks: textBlocks,
+ speechMessage,
+ thinkingEnded,
+ suggestions,
+ widgets: widgetBlocks,
+ };
+ });
+ }, [messages]);
+
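+ // If the newest message was still streaming when the page was last
+ // closed, re-attach to its backend stream and replay the events.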
+ const checkReconnect = async () => {
+ setIsReady(true);
+ console.debug(new Date(), 'app:ready');
+
+ if (messages.length > 0) {
+ const lastMsg = messages[messages.length - 1];
+
+ if (lastMsg.status === 'answering') {
+ setLoading(true);
+ setResearchEnded(false);
+ setMessageAppeared(false);
+
+ const res = await fetch(`/api/reconnect/${lastMsg.backendId}`, {
+ method: 'POST',
+ });
+
+ if (!res.body) throw new Error('No response body');
+
+ const reader = res.body?.getReader();
+ const decoder = new TextDecoder('utf-8');
+
+ let partialChunk = '';
+
+ const messageHandler = getMessageHandler(lastMsg);
+
+ while (true) {
+ const { value, done } = await reader.read();
+ if (done) break;
+
+ partialChunk += decoder.decode(value, { stream: true });
+
+ try {
+ const messages = partialChunk.split('\n');
+ for (const msg of messages) {
+ if (!msg.trim()) continue;
+ const json = JSON.parse(msg);
+ messageHandler(json);
+ }
+ partialChunk = '';
+ } catch (error) {
+ console.warn('Incomplete JSON, waiting for next chunk...');
}
}
-
- sections.push({
- userMessage: msg,
- assistantMessage: aiMessage,
- sourceMessage: sourceMessage,
- parsedAssistantMessage: processedMessage,
- speechMessage,
- thinkingEnded,
- suggestions: suggestions,
- });
}
- });
-
- return sections;
- }, [messages]);
+ }
+ };
useEffect(() => {
checkConfig(
@@ -461,7 +484,7 @@ export const ChatProvider = ({ children }: { children: React.ReactNode }) => {
setMessages,
setIsMessagesLoaded,
setChatHistory,
- setFocusMode,
+ setSources,
setNotFound,
setFiles,
setFileIds,
@@ -479,34 +502,29 @@ export const ChatProvider = ({ children }: { children: React.ReactNode }) => {
}, [messages]);
useEffect(() => {
- if (isMessagesLoaded && isConfigReady) {
+ if (isMessagesLoaded && isConfigReady && newChatCreated) {
setIsReady(true);
console.debug(new Date(), 'app:ready');
+ } else if (isMessagesLoaded && isConfigReady && !newChatCreated) {
+ checkReconnect();
} else {
setIsReady(false);
}
- }, [isMessagesLoaded, isConfigReady]);
+ }, [isMessagesLoaded, isConfigReady, newChatCreated]);
const rewrite = (messageId: string) => {
const index = messages.findIndex((msg) => msg.messageId === messageId);
- const chatTurnsIndex = chatTurns.findIndex(
- (msg) => msg.messageId === messageId,
- );
if (index === -1) return;
- const message = chatTurns[chatTurnsIndex - 1];
+ setMessages((prev) => prev.slice(0, index));
- setMessages((prev) => {
- return [
- ...prev.slice(0, messages.length > 2 ? messages.indexOf(message) : 0),
- ];
- });
setChatHistory((prev) => {
- return [...prev.slice(0, chatTurns.length > 2 ? chatTurnsIndex - 1 : 0)];
+ return prev.slice(0, index * 2);
});
- sendMessage(message.content, message.messageId, true);
+ const messageToRewrite = messages[index];
+ sendMessage(messageToRewrite.query, messageToRewrite.messageId, true);
};
useEffect(() => {
@@ -520,95 +538,98 @@ export const ChatProvider = ({ children }: { children: React.ReactNode }) => {
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [isConfigReady, isReady, initialMessage]);
- const sendMessage: ChatContext['sendMessage'] = async (
- message,
- messageId,
- rewrite = false,
- ) => {
- if (loading || !message) return;
- setLoading(true);
- setMessageAppeared(false);
+ const getMessageHandler = (message: Message) => {
+ const messageId = message.messageId;
- if (messages.length <= 1) {
- window.history.replaceState(null, '', `/c/${chatId}`);
- }
-
- let recievedMessage = '';
- let added = false;
-
- messageId = messageId ?? crypto.randomBytes(7).toString('hex');
-
- setMessages((prevMessages) => [
- ...prevMessages,
- {
- content: message,
- messageId: messageId,
- chatId: chatId!,
- role: 'user',
- createdAt: new Date(),
- },
- ]);
-
- const messageHandler = async (data: any) => {
+ return async (data: any) => {
if (data.type === 'error') {
toast.error(data.data);
setLoading(false);
+ setMessages((prev) =>
+ prev.map((msg) =>
+ msg.messageId === messageId
+ ? { ...msg, status: 'error' as const }
+ : msg,
+ ),
+ );
return;
}
- if (data.type === 'sources') {
- setMessages((prevMessages) => [
- ...prevMessages,
- {
- messageId: data.messageId,
- chatId: chatId!,
- role: 'source',
- sources: data.data,
- createdAt: new Date(),
- },
- ]);
- if (data.data.length > 0) {
+ if (data.type === 'researchComplete') {
+ setResearchEnded(true);
+ if (
+ message.responseBlocks.find(
+ (b) => b.type === 'source' && b.data.length > 0,
+ )
+ ) {
setMessageAppeared(true);
}
}
- if (data.type === 'message') {
- if (!added) {
- setMessages((prevMessages) => [
- ...prevMessages,
- {
- content: data.data,
- messageId: data.messageId,
- chatId: chatId!,
- role: 'assistant',
- createdAt: new Date(),
- },
- ]);
- added = true;
- setMessageAppeared(true);
- } else {
- setMessages((prev) =>
- prev.map((message) => {
- if (
- message.messageId === data.messageId &&
- message.role === 'assistant'
- ) {
- return { ...message, content: message.content + data.data };
- }
+ if (data.type === 'block') {
+ setMessages((prev) =>
+ prev.map((msg) => {
+ if (msg.messageId === messageId) {
+ return {
+ ...msg,
+ responseBlocks: [...msg.responseBlocks, data.block],
+ };
+ }
+ return msg;
+ }),
+ );
- return message;
- }),
- );
+ if (
+ (data.block.type === 'source' && data.block.data.length > 0) ||
+ data.block.type === 'text'
+ ) {
+ setMessageAppeared(true);
}
- recievedMessage += data.data;
+ }
+
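+ // Block updates stream in as RFC 6902 JSON Patch operations, applied
+ // to a copy of the matching block.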
+ if (data.type === 'updateBlock') {
+ setMessages((prev) =>
+ prev.map((msg) => {
+ if (msg.messageId === messageId) {
+ const updatedBlocks = msg.responseBlocks.map((block) => {
+ if (block.id === data.blockId) {
+ const updatedBlock = { ...block };
+ applyPatch(updatedBlock, data.patch);
+ return updatedBlock;
+ }
+ return block;
+ });
+ return { ...msg, responseBlocks: updatedBlocks };
+ }
+ return msg;
+ }),
+ );
}
if (data.type === 'messageEnd') {
- setChatHistory((prevHistory) => [
- ...prevHistory,
- ['human', message],
- ['assistant', recievedMessage],
- ]);
+ const currentMsg = messagesRef.current.find(
+ (msg) => msg.messageId === messageId,
+ );
+
+ const newHistory: [string, string][] = [
+ ...chatHistory,
+ ['human', message.query],
+ [
+ 'assistant',
+ currentMsg?.responseBlocks.find((b) => b.type === 'text')?.data ||
+ '',
+ ],
+ ];
+
+ setChatHistory(newHistory);
+
+ setMessages((prev) =>
+ prev.map((msg) =>
+ msg.messageId === messageId
+ ? { ...msg, status: 'completed' as const }
+ : msg,
+ ),
+ );
setLoading(false);
@@ -626,41 +647,67 @@ export const ChatProvider = ({ children }: { children: React.ReactNode }) => {
?.click();
}
- /* Check if there are sources after message id's index and no suggestions */
+ // Check if there are sources and no suggestions
- const userMessageIndex = messagesRef.current.findIndex(
- (msg) => msg.messageId === messageId && msg.role === 'user',
+ const hasSourceBlocks = currentMsg?.responseBlocks.some(
+ (block) => block.type === 'source' && block.data.length > 0,
+ );
+ const hasSuggestions = currentMsg?.responseBlocks.some(
+ (block) => block.type === 'suggestion',
);
- const sourceMessage = messagesRef.current.find(
- (msg, i) => i > userMessageIndex && msg.role === 'source',
- ) as SourceMessage | undefined;
+ if (hasSourceBlocks && !hasSuggestions) {
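+        // Fetch follow-up suggestions once per answered message.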
+ const suggestions = await getSuggestions(newHistory);
+ const suggestionBlock: Block = {
+ id: crypto.randomBytes(7).toString('hex'),
+ type: 'suggestion',
+ data: suggestions,
+ };
- const suggestionMessageIndex = messagesRef.current.findIndex(
- (msg, i) => i > userMessageIndex && msg.role === 'suggestion',
- );
-
- if (
- sourceMessage &&
- sourceMessage.sources.length > 0 &&
- suggestionMessageIndex == -1
- ) {
- const suggestions = await getSuggestions(messagesRef.current);
- setMessages((prev) => {
- return [
- ...prev,
- {
- role: 'suggestion',
- suggestions: suggestions,
- chatId: chatId!,
- createdAt: new Date(),
- messageId: crypto.randomBytes(7).toString('hex'),
- },
- ];
- });
+ setMessages((prev) =>
+ prev.map((msg) => {
+ if (msg.messageId === messageId) {
+ return {
+ ...msg,
+ responseBlocks: [...msg.responseBlocks, suggestionBlock],
+ };
+ }
+ return msg;
+ }),
+ );
}
}
};
+ };
+
+ const sendMessage: ChatContext['sendMessage'] = async (
+ message,
+ messageId,
+ rewrite = false,
+ ) => {
+ if (loading || !message) return;
+ setLoading(true);
+ setResearchEnded(false);
+ setMessageAppeared(false);
+
+ if (messages.length <= 1) {
+ window.history.replaceState(null, '', `/c/${chatId}`);
+ }
+
+ messageId = messageId ?? crypto.randomBytes(7).toString('hex');
+ const backendId = crypto.randomBytes(20).toString('hex');
+
+ const newMessage: Message = {
+ messageId,
+ chatId: chatId!,
+ backendId,
+ query: message,
+ responseBlocks: [],
+ status: 'answering',
+ createdAt: new Date(),
+ };
+
+ setMessages((prevMessages) => [...prevMessages, newMessage]);
const messageIndex = messages.findIndex((m) => m.messageId === messageId);
@@ -678,7 +725,7 @@ export const ChatProvider = ({ children }: { children: React.ReactNode }) => {
},
chatId: chatId!,
files: fileIds,
- focusMode: focusMode,
+ sources: sources,
optimizationMode: optimizationMode,
history: rewrite
? chatHistory.slice(0, messageIndex === -1 ? undefined : messageIndex)
@@ -702,6 +749,8 @@ export const ChatProvider = ({ children }: { children: React.ReactNode }) => {
let partialChunk = '';
+ const messageHandler = getMessageHandler(newMessage);
+
while (true) {
const { value, done } = await reader.read();
if (done) break;
@@ -726,12 +775,11 @@ export const ChatProvider = ({ children }: { children: React.ReactNode }) => {
{
optimizationMode,
setFileIds,
setFiles,
- setFocusMode,
+ setSources,
setOptimizationMode,
rewrite,
sendMessage,
@@ -750,6 +798,8 @@ export const ChatProvider = ({ children }: { children: React.ReactNode }) => {
chatModelProvider,
embeddingModelProvider,
setEmbeddingModelProvider,
+ researchEnded,
+ setResearchEnded,
}}
>
{children}
diff --git a/src/lib/models/base/embedding.ts b/src/lib/models/base/embedding.ts
new file mode 100644
index 0000000..a817605
--- /dev/null
+++ b/src/lib/models/base/embedding.ts
@@ -0,0 +1,10 @@
+import { Chunk } from '@/lib/types';
+
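+// Provider-agnostic embedding interface; implementations return one vector per input text or chunk.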
+abstract class BaseEmbedding<CONFIG> {
+  constructor(protected config: CONFIG) {}
+  abstract embedText(texts: string[]): Promise<number[][]>;
+  abstract embedChunks(chunks: Chunk[]): Promise<number[][]>;
+}
+
+export default BaseEmbedding;
diff --git a/src/lib/models/base/llm.ts b/src/lib/models/base/llm.ts
new file mode 100644
index 0000000..0e175e7
--- /dev/null
+++ b/src/lib/models/base/llm.ts
@@ -0,0 +1,23 @@
+import z from 'zod';
+import {
+ GenerateObjectInput,
+ GenerateOptions,
+ GenerateTextInput,
+ GenerateTextOutput,
+ StreamTextOutput,
+} from '../types';
+
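+// Provider-agnostic chat-model interface; each provider ships a concrete implementation of these methods.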
+abstract class BaseLLM<CONFIG> {
+  constructor(protected config: CONFIG) {}
+  abstract generateText(input: GenerateTextInput): Promise<GenerateTextOutput>;
+  abstract streamText(
+    input: GenerateTextInput,
+  ): AsyncGenerator<StreamTextOutput>;
+  abstract generateObject<T>(input: GenerateObjectInput): Promise<T>;
+  abstract streamObject<T>(
+    input: GenerateObjectInput,
+  ): AsyncGenerator<Partial<T>>;
+}
+
+export default BaseLLM;
diff --git a/src/lib/models/providers/baseProvider.ts b/src/lib/models/base/provider.ts
similarity index 78%
rename from src/lib/models/providers/baseProvider.ts
rename to src/lib/models/base/provider.ts
index 980a2b2..cf69d49 100644
--- a/src/lib/models/providers/baseProvider.ts
+++ b/src/lib/models/base/provider.ts
@@ -1,7 +1,7 @@
-import { Embeddings } from '@langchain/core/embeddings';
-import { BaseChatModel } from '@langchain/core/language_models/chat_models';
-import { Model, ModelList, ProviderMetadata } from '../types';
+import { ModelList, ProviderMetadata } from '../types';
import { UIConfigField } from '@/lib/config/types';
+import BaseLLM from './llm';
+import BaseEmbedding from './embedding';
 abstract class BaseModelProvider<CONFIG> {
constructor(
@@ -11,8 +11,8 @@ abstract class BaseModelProvider<CONFIG> {
) {}
  abstract getDefaultModels(): Promise<ModelList>;
  abstract getModelList(): Promise<ModelList>;
-  abstract loadChatModel(modelName: string): Promise<BaseChatModel>;
-  abstract loadEmbeddingModel(modelName: string): Promise<Embeddings>;
+  abstract loadChatModel(modelName: string): Promise<BaseLLM<any>>;
+  abstract loadEmbeddingModel(modelName: string): Promise<BaseEmbedding<any>>;
static getProviderConfigFields(): UIConfigField[] {
throw new Error('Method not implemented.');
}
diff --git a/src/lib/models/providers/aiml.ts b/src/lib/models/providers/aiml.ts
deleted file mode 100644
index 35ccf79..0000000
--- a/src/lib/models/providers/aiml.ts
+++ /dev/null
@@ -1,152 +0,0 @@
-import { BaseChatModel } from '@langchain/core/language_models/chat_models';
-import { Model, ModelList, ProviderMetadata } from '../types';
-import BaseModelProvider from './baseProvider';
-import { ChatOpenAI, OpenAIEmbeddings } from '@langchain/openai';
-import { Embeddings } from '@langchain/core/embeddings';
-import { UIConfigField } from '@/lib/config/types';
-import { getConfiguredModelProviderById } from '@/lib/config/serverRegistry';
-
-interface AimlConfig {
- apiKey: string;
-}
-
-const providerConfigFields: UIConfigField[] = [
- {
- type: 'password',
- name: 'API Key',
- key: 'apiKey',
- description: 'Your AI/ML API key',
- required: true,
- placeholder: 'AI/ML API Key',
- env: 'AIML_API_KEY',
- scope: 'server',
- },
-];
-
-class AimlProvider extends BaseModelProvider<AimlConfig> {
- constructor(id: string, name: string, config: AimlConfig) {
- super(id, name, config);
- }
-
-  async getDefaultModels(): Promise<ModelList> {
- try {
- const res = await fetch('https://api.aimlapi.com/models', {
- method: 'GET',
- headers: {
- 'Content-Type': 'application/json',
- Authorization: `Bearer ${this.config.apiKey}`,
- },
- });
-
- const data = await res.json();
-
- const chatModels: Model[] = data.data
- .filter((m: any) => m.type === 'chat-completion')
- .map((m: any) => {
- return {
- name: m.id,
- key: m.id,
- };
- });
-
- const embeddingModels: Model[] = data.data
- .filter((m: any) => m.type === 'embedding')
- .map((m: any) => {
- return {
- name: m.id,
- key: m.id,
- };
- });
-
- return {
- embedding: embeddingModels,
- chat: chatModels,
- };
- } catch (err) {
- if (err instanceof TypeError) {
- throw new Error(
- 'Error connecting to AI/ML API. Please ensure your API key is correct and the service is available.',
- );
- }
-
- throw err;
- }
- }
-
-  async getModelList(): Promise<ModelList> {
- const defaultModels = await this.getDefaultModels();
- const configProvider = getConfiguredModelProviderById(this.id)!;
-
- return {
- embedding: [
- ...defaultModels.embedding,
- ...configProvider.embeddingModels,
- ],
- chat: [...defaultModels.chat, ...configProvider.chatModels],
- };
- }
-
-  async loadChatModel(key: string): Promise<BaseChatModel> {
- const modelList = await this.getModelList();
-
- const exists = modelList.chat.find((m) => m.key === key);
-
- if (!exists) {
- throw new Error(
- 'Error Loading AI/ML API Chat Model. Invalid Model Selected',
- );
- }
-
- return new ChatOpenAI({
- apiKey: this.config.apiKey,
- temperature: 0.7,
- model: key,
- configuration: {
- baseURL: 'https://api.aimlapi.com',
- },
- });
- }
-
-  async loadEmbeddingModel(key: string): Promise<Embeddings> {
- const modelList = await this.getModelList();
- const exists = modelList.embedding.find((m) => m.key === key);
-
- if (!exists) {
- throw new Error(
- 'Error Loading AI/ML API Embedding Model. Invalid Model Selected.',
- );
- }
-
- return new OpenAIEmbeddings({
- apiKey: this.config.apiKey,
- model: key,
- configuration: {
- baseURL: 'https://api.aimlapi.com',
- },
- });
- }
-
- static parseAndValidate(raw: any): AimlConfig {
- if (!raw || typeof raw !== 'object')
- throw new Error('Invalid config provided. Expected object');
- if (!raw.apiKey)
- throw new Error('Invalid config provided. API key must be provided');
-
- return {
- apiKey: String(raw.apiKey),
- };
- }
-
- static getProviderConfigFields(): UIConfigField[] {
- return providerConfigFields;
- }
-
- static getProviderMetadata(): ProviderMetadata {
- return {
- key: 'aiml',
- name: 'AI/ML API',
- };
- }
-}
-
-export default AimlProvider;
diff --git a/src/lib/models/providers/anthropic/anthropicLLM.ts b/src/lib/models/providers/anthropic/anthropicLLM.ts
new file mode 100644
index 0000000..a020de6
--- /dev/null
+++ b/src/lib/models/providers/anthropic/anthropicLLM.ts
@@ -0,0 +1,6 @@
+import OpenAILLM from '../openai/openaiLLM';
+
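+// Anthropic exposes an OpenAI-compatible API surface, so the OpenAI wrapper is reused unchanged.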
+class AnthropicLLM extends OpenAILLM {}
+
+export default AnthropicLLM;
diff --git a/src/lib/models/providers/anthropic.ts b/src/lib/models/providers/anthropic/index.ts
similarity index 84%
rename from src/lib/models/providers/anthropic.ts
rename to src/lib/models/providers/anthropic/index.ts
index e071159..0342647 100644
--- a/src/lib/models/providers/anthropic.ts
+++ b/src/lib/models/providers/anthropic/index.ts
@@ -1,10 +1,10 @@
-import { BaseChatModel } from '@langchain/core/language_models/chat_models';
-import { Model, ModelList, ProviderMetadata } from '../types';
-import BaseModelProvider from './baseProvider';
-import { ChatAnthropic } from '@langchain/anthropic';
-import { Embeddings } from '@langchain/core/embeddings';
import { UIConfigField } from '@/lib/config/types';
import { getConfiguredModelProviderById } from '@/lib/config/serverRegistry';
+import { Model, ModelList, ProviderMetadata } from '../../types';
+import BaseEmbedding from '../../base/embedding';
+import BaseModelProvider from '../../base/provider';
+import BaseLLM from '../../base/llm';
+import AnthropicLLM from './anthropicLLM';
interface AnthropicConfig {
apiKey: string;
@@ -67,7 +67,7 @@ class AnthropicProvider extends BaseModelProvider<AnthropicConfig> {
};
}
-  async loadChatModel(key: string): Promise<BaseChatModel> {
+  async loadChatModel(key: string): Promise<BaseLLM<any>> {
const modelList = await this.getModelList();
const exists = modelList.chat.find((m) => m.key === key);
@@ -78,14 +78,14 @@ class AnthropicProvider extends BaseModelProvider<AnthropicConfig> {
);
}
- return new ChatAnthropic({
+ return new AnthropicLLM({
apiKey: this.config.apiKey,
- temperature: 0.7,
model: key,
+ baseURL: 'https://api.anthropic.com/v1',
});
}
-  async loadEmbeddingModel(key: string): Promise<Embeddings> {
+  async loadEmbeddingModel(key: string): Promise<BaseEmbedding<any>> {
throw new Error('Anthropic provider does not support embedding models.');
}
diff --git a/src/lib/models/providers/deepseek.ts b/src/lib/models/providers/deepseek.ts
deleted file mode 100644
index 9b29d83..0000000
--- a/src/lib/models/providers/deepseek.ts
+++ /dev/null
@@ -1,107 +0,0 @@
-import { BaseChatModel } from '@langchain/core/language_models/chat_models';
-import { Model, ModelList, ProviderMetadata } from '../types';
-import BaseModelProvider from './baseProvider';
-import { ChatOpenAI } from '@langchain/openai';
-import { Embeddings } from '@langchain/core/embeddings';
-import { UIConfigField } from '@/lib/config/types';
-import { getConfiguredModelProviderById } from '@/lib/config/serverRegistry';
-
-interface DeepSeekConfig {
- apiKey: string;
-}
-
-const defaultChatModels: Model[] = [
- {
- name: 'Deepseek Chat / DeepSeek V3.2 Exp',
- key: 'deepseek-chat',
- },
- {
- name: 'Deepseek Reasoner / DeepSeek V3.2 Exp',
- key: 'deepseek-reasoner',
- },
-];
-
-const providerConfigFields: UIConfigField[] = [
- {
- type: 'password',
- name: 'API Key',
- key: 'apiKey',
- description: 'Your DeepSeek API key',
- required: true,
- placeholder: 'DeepSeek API Key',
- env: 'DEEPSEEK_API_KEY',
- scope: 'server',
- },
-];
-
-class DeepSeekProvider extends BaseModelProvider<DeepSeekConfig> {
- constructor(id: string, name: string, config: DeepSeekConfig) {
- super(id, name, config);
- }
-
-  async getDefaultModels(): Promise<ModelList> {
- return {
- embedding: [],
- chat: defaultChatModels,
- };
- }
-
-  async getModelList(): Promise<ModelList> {
- const defaultModels = await this.getDefaultModels();
- const configProvider = getConfiguredModelProviderById(this.id)!;
-
- return {
- embedding: [],
- chat: [...defaultModels.chat, ...configProvider.chatModels],
- };
- }
-
-  async loadChatModel(key: string): Promise<BaseChatModel> {
- const modelList = await this.getModelList();
-
- const exists = modelList.chat.find((m) => m.key === key);
-
- if (!exists) {
- throw new Error(
- 'Error Loading DeepSeek Chat Model. Invalid Model Selected',
- );
- }
-
- return new ChatOpenAI({
- apiKey: this.config.apiKey,
- temperature: 0.7,
- model: key,
- configuration: {
- baseURL: 'https://api.deepseek.com',
- },
- });
- }
-
-  async loadEmbeddingModel(key: string): Promise<Embeddings> {
- throw new Error('DeepSeek provider does not support embedding models.');
- }
-
- static parseAndValidate(raw: any): DeepSeekConfig {
- if (!raw || typeof raw !== 'object')
- throw new Error('Invalid config provided. Expected object');
- if (!raw.apiKey)
- throw new Error('Invalid config provided. API key must be provided');
-
- return {
- apiKey: String(raw.apiKey),
- };
- }
-
- static getProviderConfigFields(): UIConfigField[] {
- return providerConfigFields;
- }
-
- static getProviderMetadata(): ProviderMetadata {
- return {
- key: 'deepseek',
- name: 'Deepseek AI',
- };
- }
-}
-
-export default DeepSeekProvider;
diff --git a/src/lib/models/providers/gemini/geminiEmbedding.ts b/src/lib/models/providers/gemini/geminiEmbedding.ts
new file mode 100644
index 0000000..0054853
--- /dev/null
+++ b/src/lib/models/providers/gemini/geminiEmbedding.ts
@@ -0,0 +1,5 @@
+import OpenAIEmbedding from '../openai/openaiEmbedding';
+
+class GeminiEmbedding extends OpenAIEmbedding {}
+
+export default GeminiEmbedding;
diff --git a/src/lib/models/providers/gemini/geminiLLM.ts b/src/lib/models/providers/gemini/geminiLLM.ts
new file mode 100644
index 0000000..0a0d4ff
--- /dev/null
+++ b/src/lib/models/providers/gemini/geminiLLM.ts
@@ -0,0 +1,6 @@
+import OpenAILLM from '../openai/openaiLLM';
+
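+// Gemini's OpenAI-compatibility endpoint (v1beta/openai) lets the OpenAI wrapper be reused directly.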
+class GeminiLLM extends OpenAILLM {}
+
+export default GeminiLLM;
diff --git a/src/lib/models/providers/gemini.ts b/src/lib/models/providers/gemini/index.ts
similarity index 80%
rename from src/lib/models/providers/gemini.ts
rename to src/lib/models/providers/gemini/index.ts
index 6cfd913..2eb92cd 100644
--- a/src/lib/models/providers/gemini.ts
+++ b/src/lib/models/providers/gemini/index.ts
@@ -1,13 +1,11 @@
-import { BaseChatModel } from '@langchain/core/language_models/chat_models';
-import { Model, ModelList, ProviderMetadata } from '../types';
-import BaseModelProvider from './baseProvider';
-import {
- ChatGoogleGenerativeAI,
- GoogleGenerativeAIEmbeddings,
-} from '@langchain/google-genai';
-import { Embeddings } from '@langchain/core/embeddings';
import { UIConfigField } from '@/lib/config/types';
import { getConfiguredModelProviderById } from '@/lib/config/serverRegistry';
+import { Model, ModelList, ProviderMetadata } from '../../types';
+import GeminiEmbedding from './geminiEmbedding';
+import BaseEmbedding from '../../base/embedding';
+import BaseModelProvider from '../../base/provider';
+import BaseLLM from '../../base/llm';
+import GeminiLLM from './geminiLLM';
interface GeminiConfig {
apiKey: string;
@@ -18,9 +16,9 @@ const providerConfigFields: UIConfigField[] = [
type: 'password',
name: 'API Key',
key: 'apiKey',
- description: 'Your Google Gemini API key',
+ description: 'Your Gemini API key',
required: true,
- placeholder: 'Google Gemini API Key',
+ placeholder: 'Gemini API Key',
env: 'GEMINI_API_KEY',
scope: 'server',
},
@@ -85,7 +83,7 @@ class GeminiProvider extends BaseModelProvider<GeminiConfig> {
};
}
-  async loadChatModel(key: string): Promise<BaseChatModel> {
+  async loadChatModel(key: string): Promise<BaseLLM<any>> {
const modelList = await this.getModelList();
const exists = modelList.chat.find((m) => m.key === key);
@@ -96,14 +94,14 @@ class GeminiProvider extends BaseModelProvider<GeminiConfig> {
);
}
- return new ChatGoogleGenerativeAI({
+ return new GeminiLLM({
apiKey: this.config.apiKey,
- temperature: 0.7,
model: key,
+ baseURL: 'https://generativelanguage.googleapis.com/v1beta/openai',
});
}
-  async loadEmbeddingModel(key: string): Promise<Embeddings> {
+  async loadEmbeddingModel(key: string): Promise<BaseEmbedding<any>> {
const modelList = await this.getModelList();
const exists = modelList.embedding.find((m) => m.key === key);
@@ -113,9 +111,10 @@ class GeminiProvider extends BaseModelProvider<GeminiConfig> {
);
}
- return new GoogleGenerativeAIEmbeddings({
+ return new GeminiEmbedding({
apiKey: this.config.apiKey,
model: key,
+ baseURL: 'https://generativelanguage.googleapis.com/v1beta/openai',
});
}
@@ -137,7 +136,7 @@ class GeminiProvider extends BaseModelProvider<GeminiConfig> {
static getProviderMetadata(): ProviderMetadata {
return {
key: 'gemini',
- name: 'Google Gemini',
+ name: 'Gemini',
};
}
}
diff --git a/src/lib/models/providers/groq/groqLLM.ts b/src/lib/models/providers/groq/groqLLM.ts
new file mode 100644
index 0000000..dfcb294
--- /dev/null
+++ b/src/lib/models/providers/groq/groqLLM.ts
@@ -0,0 +1,5 @@
+import OpenAILLM from '../openai/openaiLLM';
+
+class GroqLLM extends OpenAILLM {}
+
+export default GroqLLM;
diff --git a/src/lib/models/providers/groq.ts b/src/lib/models/providers/groq/index.ts
similarity index 58%
rename from src/lib/models/providers/groq.ts
rename to src/lib/models/providers/groq/index.ts
index a87ea88..f50323e 100644
--- a/src/lib/models/providers/groq.ts
+++ b/src/lib/models/providers/groq/index.ts
@@ -1,10 +1,10 @@
-import { BaseChatModel } from '@langchain/core/language_models/chat_models';
-import { Model, ModelList, ProviderMetadata } from '../types';
-import BaseModelProvider from './baseProvider';
-import { ChatGroq } from '@langchain/groq';
-import { Embeddings } from '@langchain/core/embeddings';
import { UIConfigField } from '@/lib/config/types';
import { getConfiguredModelProviderById } from '@/lib/config/serverRegistry';
+import { Model, ModelList, ProviderMetadata } from '../../types';
+import BaseEmbedding from '../../base/embedding';
+import BaseModelProvider from '../../base/provider';
+import BaseLLM from '../../base/llm';
+import GroqLLM from './groqLLM';
interface GroqConfig {
apiKey: string;
@@ -29,37 +29,29 @@ class GroqProvider extends BaseModelProvider<GroqConfig> {
}
  async getDefaultModels(): Promise<ModelList> {
- try {
- const res = await fetch('https://api.groq.com/openai/v1/models', {
- method: 'GET',
- headers: {
- 'Content-Type': 'application/json',
- Authorization: `Bearer ${this.config.apiKey}`,
- },
+ const res = await fetch(`https://api.groq.com/openai/v1/models`, {
+ method: 'GET',
+ headers: {
+ 'Content-Type': 'application/json',
+ Authorization: `Bearer ${this.config.apiKey}`,
+ },
+ });
+
+ const data = await res.json();
+
+ const defaultChatModels: Model[] = [];
+
+ data.data.forEach((m: any) => {
+ defaultChatModels.push({
+ key: m.id,
+ name: m.id,
});
+ });
- const data = await res.json();
-
- const models: Model[] = data.data.map((m: any) => {
- return {
- name: m.id,
- key: m.id,
- };
- });
-
- return {
- embedding: [],
- chat: models,
- };
- } catch (err) {
- if (err instanceof TypeError) {
- throw new Error(
- 'Error connecting to Groq API. Please ensure your API key is correct and the Groq service is available.',
- );
- }
-
- throw err;
- }
+ return {
+ embedding: [],
+ chat: defaultChatModels,
+ };
}
  async getModelList(): Promise<ModelList> {
@@ -67,12 +59,15 @@ class GroqProvider extends BaseModelProvider<GroqConfig> {
const configProvider = getConfiguredModelProviderById(this.id)!;
return {
- embedding: [],
+ embedding: [
+ ...defaultModels.embedding,
+ ...configProvider.embeddingModels,
+ ],
chat: [...defaultModels.chat, ...configProvider.chatModels],
};
}
-  async loadChatModel(key: string): Promise<BaseChatModel> {
+  async loadChatModel(key: string): Promise<BaseLLM<any>> {
const modelList = await this.getModelList();
const exists = modelList.chat.find((m) => m.key === key);
@@ -81,15 +76,15 @@ class GroqProvider extends BaseModelProvider<GroqConfig> {
throw new Error('Error Loading Groq Chat Model. Invalid Model Selected');
}
- return new ChatGroq({
+ return new GroqLLM({
apiKey: this.config.apiKey,
- temperature: 0.7,
model: key,
+ baseURL: 'https://api.groq.com/openai/v1',
});
}
-  async loadEmbeddingModel(key: string): Promise<Embeddings> {
- throw new Error('Groq provider does not support embedding models.');
+  async loadEmbeddingModel(key: string): Promise<BaseEmbedding<any>> {
+ throw new Error('Groq Provider does not support embedding models.');
}
static parseAndValidate(raw: any): GroqConfig {
diff --git a/src/lib/models/providers/index.ts b/src/lib/models/providers/index.ts
index addca61..ef52f4b 100644
--- a/src/lib/models/providers/index.ts
+++ b/src/lib/models/providers/index.ts
@@ -1,27 +1,21 @@
import { ModelProviderUISection } from '@/lib/config/types';
-import { ProviderConstructor } from './baseProvider';
+import { ProviderConstructor } from '../base/provider';
import OpenAIProvider from './openai';
import OllamaProvider from './ollama';
-import TransformersProvider from './transformers';
-import AnthropicProvider from './anthropic';
import GeminiProvider from './gemini';
+import TransformersProvider from './transformers';
import GroqProvider from './groq';
-import DeepSeekProvider from './deepseek';
-import LMStudioProvider from './lmstudio';
import LemonadeProvider from './lemonade';
-import AimlProvider from '@/lib/models/providers/aiml';
+import AnthropicProvider from './anthropic';
 export const providers: Record<string, ProviderConstructor<any>> = {
openai: OpenAIProvider,
ollama: OllamaProvider,
- transformers: TransformersProvider,
- anthropic: AnthropicProvider,
gemini: GeminiProvider,
+ transformers: TransformersProvider,
groq: GroqProvider,
- deepseek: DeepSeekProvider,
- aiml: AimlProvider,
- lmstudio: LMStudioProvider,
lemonade: LemonadeProvider,
+ anthropic: AnthropicProvider,
};
export const getModelProvidersUIConfigSection =
diff --git a/src/lib/models/providers/lemonade.ts b/src/lib/models/providers/lemonade/index.ts
similarity index 74%
rename from src/lib/models/providers/lemonade.ts
rename to src/lib/models/providers/lemonade/index.ts
index 20680a8..d31676b 100644
--- a/src/lib/models/providers/lemonade.ts
+++ b/src/lib/models/providers/lemonade/index.ts
@@ -1,10 +1,11 @@
-import { BaseChatModel } from '@langchain/core/language_models/chat_models';
-import { Model, ModelList, ProviderMetadata } from '../types';
-import BaseModelProvider from './baseProvider';
-import { ChatOpenAI, OpenAIEmbeddings } from '@langchain/openai';
-import { Embeddings } from '@langchain/core/embeddings';
import { UIConfigField } from '@/lib/config/types';
import { getConfiguredModelProviderById } from '@/lib/config/serverRegistry';
+import BaseModelProvider from '../../base/provider';
+import { Model, ModelList, ProviderMetadata } from '../../types';
+import BaseLLM from '../../base/llm';
+import LemonadeLLM from './lemonadeLLM';
+import BaseEmbedding from '../../base/embedding';
+import LemonadeEmbedding from './lemonadeEmbedding';
interface LemonadeConfig {
baseURL: string;
@@ -41,27 +42,26 @@ class LemonadeProvider extends BaseModelProvider<LemonadeConfig> {
  async getDefaultModels(): Promise<ModelList> {
try {
-      const headers: Record<string, string> = {
- 'Content-Type': 'application/json',
- };
-
- if (this.config.apiKey) {
- headers['Authorization'] = `Bearer ${this.config.apiKey}`;
- }
-
const res = await fetch(`${this.config.baseURL}/models`, {
method: 'GET',
- headers,
+ headers: {
+ 'Content-Type': 'application/json',
+ ...(this.config.apiKey
+ ? { Authorization: `Bearer ${this.config.apiKey}` }
+ : {}),
+ },
});
const data = await res.json();
- const models: Model[] = data.data.map((m: any) => {
- return {
- name: m.id,
- key: m.id,
- };
- });
+ const models: Model[] = data.data
+ .filter((m: any) => m.recipe === 'llamacpp')
+ .map((m: any) => {
+ return {
+ name: m.id,
+ key: m.id,
+ };
+ });
return {
embedding: models,
@@ -91,7 +91,7 @@ class LemonadeProvider extends BaseModelProvider<LemonadeConfig> {
};
}
-  async loadChatModel(key: string): Promise<BaseChatModel> {
+  async loadChatModel(key: string): Promise<BaseLLM<any>> {
const modelList = await this.getModelList();
const exists = modelList.chat.find((m) => m.key === key);
@@ -102,17 +102,14 @@ class LemonadeProvider extends BaseModelProvider<LemonadeConfig> {
);
}
- return new ChatOpenAI({
+ return new LemonadeLLM({
apiKey: this.config.apiKey || 'not-needed',
- temperature: 0.7,
model: key,
- configuration: {
- baseURL: this.config.baseURL,
- },
+ baseURL: this.config.baseURL,
});
}
-  async loadEmbeddingModel(key: string): Promise<Embeddings> {
+  async loadEmbeddingModel(key: string): Promise<BaseEmbedding<any>> {
const modelList = await this.getModelList();
const exists = modelList.embedding.find((m) => m.key === key);
@@ -122,12 +119,10 @@ class LemonadeProvider extends BaseModelProvider<LemonadeConfig> {
);
}
- return new OpenAIEmbeddings({
+ return new LemonadeEmbedding({
apiKey: this.config.apiKey || 'not-needed',
model: key,
- configuration: {
- baseURL: this.config.baseURL,
- },
+ baseURL: this.config.baseURL,
});
}
diff --git a/src/lib/models/providers/lemonade/lemonadeEmbedding.ts b/src/lib/models/providers/lemonade/lemonadeEmbedding.ts
new file mode 100644
index 0000000..7d720f8
--- /dev/null
+++ b/src/lib/models/providers/lemonade/lemonadeEmbedding.ts
@@ -0,0 +1,5 @@
+import OpenAIEmbedding from '../openai/openaiEmbedding';
+
+class LemonadeEmbedding extends OpenAIEmbedding {}
+
+export default LemonadeEmbedding;
diff --git a/src/lib/models/providers/lemonade/lemonadeLLM.ts b/src/lib/models/providers/lemonade/lemonadeLLM.ts
new file mode 100644
index 0000000..bfd3e28
--- /dev/null
+++ b/src/lib/models/providers/lemonade/lemonadeLLM.ts
@@ -0,0 +1,5 @@
+import OpenAILLM from '../openai/openaiLLM';
+
+class LemonadeLLM extends OpenAILLM {}
+
+export default LemonadeLLM;
diff --git a/src/lib/models/providers/lmstudio.ts b/src/lib/models/providers/lmstudio.ts
deleted file mode 100644
index 3a73a34..0000000
--- a/src/lib/models/providers/lmstudio.ts
+++ /dev/null
@@ -1,148 +0,0 @@
-import { BaseChatModel } from '@langchain/core/language_models/chat_models';
-import { Model, ModelList, ProviderMetadata } from '../types';
-import BaseModelProvider from './baseProvider';
-import { ChatOpenAI, OpenAIEmbeddings } from '@langchain/openai';
-import { Embeddings } from '@langchain/core/embeddings';
-import { UIConfigField } from '@/lib/config/types';
-import { getConfiguredModelProviderById } from '@/lib/config/serverRegistry';
-
-interface LMStudioConfig {
- baseURL: string;
-}
-
-const providerConfigFields: UIConfigField[] = [
- {
- type: 'string',
- name: 'Base URL',
- key: 'baseURL',
- description: 'The base URL for LM Studio server',
- required: true,
- placeholder: 'http://localhost:1234',
- env: 'LM_STUDIO_BASE_URL',
- scope: 'server',
- },
-];
-
-class LMStudioProvider extends BaseModelProvider<LMStudioConfig> {
- constructor(id: string, name: string, config: LMStudioConfig) {
- super(id, name, config);
- }
-
- private normalizeBaseURL(url: string): string {
- const trimmed = url.trim().replace(/\/+$/, '');
- return trimmed.endsWith('/v1') ? trimmed : `${trimmed}/v1`;
- }
-
-  async getDefaultModels(): Promise<ModelList> {
- try {
- const baseURL = this.normalizeBaseURL(this.config.baseURL);
-
- const res = await fetch(`${baseURL}/models`, {
- method: 'GET',
- headers: {
- 'Content-Type': 'application/json',
- },
- });
-
- const data = await res.json();
-
- const models: Model[] = data.data.map((m: any) => {
- return {
- name: m.id,
- key: m.id,
- };
- });
-
- return {
- embedding: models,
- chat: models,
- };
- } catch (err) {
- if (err instanceof TypeError) {
- throw new Error(
- 'Error connecting to LM Studio. Please ensure the base URL is correct and the LM Studio server is running.',
- );
- }
-
- throw err;
- }
- }
-
-  async getModelList(): Promise<ModelList> {
- const defaultModels = await this.getDefaultModels();
- const configProvider = getConfiguredModelProviderById(this.id)!;
-
- return {
- embedding: [
- ...defaultModels.embedding,
- ...configProvider.embeddingModels,
- ],
- chat: [...defaultModels.chat, ...configProvider.chatModels],
- };
- }
-
-  async loadChatModel(key: string): Promise<BaseChatModel> {
- const modelList = await this.getModelList();
-
- const exists = modelList.chat.find((m) => m.key === key);
-
- if (!exists) {
- throw new Error(
- 'Error Loading LM Studio Chat Model. Invalid Model Selected',
- );
- }
-
- return new ChatOpenAI({
- apiKey: 'lm-studio',
- temperature: 0.7,
- model: key,
- streaming: true,
- configuration: {
- baseURL: this.normalizeBaseURL(this.config.baseURL),
- },
- });
- }
-
-  async loadEmbeddingModel(key: string): Promise<Embeddings> {
- const modelList = await this.getModelList();
- const exists = modelList.embedding.find((m) => m.key === key);
-
- if (!exists) {
- throw new Error(
- 'Error Loading LM Studio Embedding Model. Invalid Model Selected.',
- );
- }
-
- return new OpenAIEmbeddings({
- apiKey: 'lm-studio',
- model: key,
- configuration: {
- baseURL: this.normalizeBaseURL(this.config.baseURL),
- },
- });
- }
-
- static parseAndValidate(raw: any): LMStudioConfig {
- if (!raw || typeof raw !== 'object')
- throw new Error('Invalid config provided. Expected object');
- if (!raw.baseURL)
- throw new Error('Invalid config provided. Base URL must be provided');
-
- return {
- baseURL: String(raw.baseURL),
- };
- }
-
- static getProviderConfigFields(): UIConfigField[] {
- return providerConfigFields;
- }
-
- static getProviderMetadata(): ProviderMetadata {
- return {
- key: 'lmstudio',
- name: 'LM Studio',
- };
- }
-}
-
-export default LMStudioProvider;
diff --git a/src/lib/models/providers/ollama.ts b/src/lib/models/providers/ollama/index.ts
similarity index 83%
rename from src/lib/models/providers/ollama.ts
rename to src/lib/models/providers/ollama/index.ts
index 9ae5899..762c2bf 100644
--- a/src/lib/models/providers/ollama.ts
+++ b/src/lib/models/providers/ollama/index.ts
@@ -1,10 +1,11 @@
-import { BaseChatModel } from '@langchain/core/language_models/chat_models';
-import { Model, ModelList, ProviderMetadata } from '../types';
-import BaseModelProvider from './baseProvider';
-import { ChatOllama, OllamaEmbeddings } from '@langchain/ollama';
-import { Embeddings } from '@langchain/core/embeddings';
import { UIConfigField } from '@/lib/config/types';
import { getConfiguredModelProviderById } from '@/lib/config/serverRegistry';
+import BaseModelProvider from '../../base/provider';
+import { Model, ModelList, ProviderMetadata } from '../../types';
+import BaseLLM from '../../base/llm';
+import BaseEmbedding from '../../base/embedding';
+import OllamaLLM from './ollamaLLM';
+import OllamaEmbedding from './ollamaEmbedding';
interface OllamaConfig {
baseURL: string;
@@ -76,7 +77,7 @@ class OllamaProvider extends BaseModelProvider<OllamaConfig> {
};
}
-  async loadChatModel(key: string): Promise<BaseChatModel> {
+  async loadChatModel(key: string): Promise<BaseLLM<any>> {
const modelList = await this.getModelList();
const exists = modelList.chat.find((m) => m.key === key);
@@ -87,14 +88,13 @@ class OllamaProvider extends BaseModelProvider<OllamaConfig> {
);
}
- return new ChatOllama({
- temperature: 0.7,
+ return new OllamaLLM({
+ baseURL: this.config.baseURL,
model: key,
- baseUrl: this.config.baseURL,
});
}
-  async loadEmbeddingModel(key: string): Promise<Embeddings> {
+  async loadEmbeddingModel(key: string): Promise<BaseEmbedding<any>> {
const modelList = await this.getModelList();
const exists = modelList.embedding.find((m) => m.key === key);
@@ -104,9 +104,9 @@ class OllamaProvider extends BaseModelProvider<OllamaConfig> {
);
}
- return new OllamaEmbeddings({
+ return new OllamaEmbedding({
model: key,
- baseUrl: this.config.baseURL,
+ baseURL: this.config.baseURL,
});
}
diff --git a/src/lib/models/providers/ollama/ollamaEmbedding.ts b/src/lib/models/providers/ollama/ollamaEmbedding.ts
new file mode 100644
index 0000000..7bb00b8
--- /dev/null
+++ b/src/lib/models/providers/ollama/ollamaEmbedding.ts
@@ -0,0 +1,40 @@
+import { Ollama } from 'ollama';
+import BaseEmbedding from '../../base/embedding';
+import { Chunk } from '@/lib/types';
+
+type OllamaConfig = {
+ model: string;
+ baseURL?: string;
+};
+
+class OllamaEmbedding extends BaseEmbedding<OllamaConfig> {
+ ollamaClient: Ollama;
+
+ constructor(protected config: OllamaConfig) {
+ super(config);
+
+ this.ollamaClient = new Ollama({
+ host: this.config.baseURL || 'http://localhost:11434',
+ });
+ }
+
+  async embedText(texts: string[]): Promise<number[][]> {
+ const response = await this.ollamaClient.embed({
+ input: texts,
+ model: this.config.model,
+ });
+
+ return response.embeddings;
+ }
+
+  async embedChunks(chunks: Chunk[]): Promise<number[][]> {
+ const response = await this.ollamaClient.embed({
+ input: chunks.map((c) => c.content),
+ model: this.config.model,
+ });
+
+ return response.embeddings;
+ }
+}
+
+export default OllamaEmbedding;
diff --git a/src/lib/models/providers/ollama/ollamaLLM.ts b/src/lib/models/providers/ollama/ollamaLLM.ts
new file mode 100644
index 0000000..9bf5139
--- /dev/null
+++ b/src/lib/models/providers/ollama/ollamaLLM.ts
@@ -0,0 +1,256 @@
+import z from 'zod';
+import BaseLLM from '../../base/llm';
+import {
+ GenerateObjectInput,
+ GenerateOptions,
+ GenerateTextInput,
+ GenerateTextOutput,
+ StreamTextOutput,
+} from '../../types';
+import { Ollama, Tool as OllamaTool, Message as OllamaMessage } from 'ollama';
+import { parse } from 'partial-json';
+import crypto from 'crypto';
+import { Message } from '@/lib/types';
+
+type OllamaConfig = {
+ baseURL: string;
+ model: string;
+ options?: GenerateOptions;
+};
+
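+// Model families that can emit "thinking" output; for these we pass think: false so responses stay plain text.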
+const reasoningModels = [
+ 'gpt-oss',
+ 'deepseek-r1',
+ 'qwen3',
+ 'deepseek-v3.1',
+ 'magistral',
+ 'nemotron-3-nano',
+];
+
+class OllamaLLM extends BaseLLM<OllamaConfig> {
+ ollamaClient: Ollama;
+
+ constructor(protected config: OllamaConfig) {
+ super(config);
+
+ this.ollamaClient = new Ollama({
+ host: this.config.baseURL || 'http://localhost:11434',
+ });
+ }
+
+ convertToOllamaMessages(messages: Message[]): OllamaMessage[] {
+ return messages.map((msg) => {
+ if (msg.role === 'tool') {
+ return {
+ role: 'tool',
+ tool_name: msg.name,
+ content: msg.content,
+ } as OllamaMessage;
+ } else if (msg.role === 'assistant') {
+ return {
+ role: 'assistant',
+ content: msg.content,
+ tool_calls:
+ msg.tool_calls?.map((tc, i) => ({
+ function: {
+ index: i,
+ name: tc.name,
+ arguments: tc.arguments,
+ },
+ })) || [],
+ };
+ }
+
+ return msg;
+ });
+ }
+
+  async generateText(input: GenerateTextInput): Promise<GenerateTextOutput> {
+ const ollamaTools: OllamaTool[] = [];
+
+ input.tools?.forEach((tool) => {
+ ollamaTools.push({
+ type: 'function',
+ function: {
+ name: tool.name,
+ description: tool.description,
+ parameters: z.toJSONSchema(tool.schema).properties,
+ },
+ });
+ });
+
+ const res = await this.ollamaClient.chat({
+ model: this.config.model,
+ messages: this.convertToOllamaMessages(input.messages),
+ tools: ollamaTools.length > 0 ? ollamaTools : undefined,
+ ...(reasoningModels.find((m) => this.config.model.includes(m))
+ ? { think: false }
+ : {}),
+ options: {
+ top_p: input.options?.topP ?? this.config.options?.topP,
+ temperature:
+ input.options?.temperature ?? this.config.options?.temperature ?? 0.7,
+ num_predict: input.options?.maxTokens ?? this.config.options?.maxTokens,
+ num_ctx: 32000,
+ frequency_penalty:
+ input.options?.frequencyPenalty ??
+ this.config.options?.frequencyPenalty,
+ presence_penalty:
+ input.options?.presencePenalty ??
+ this.config.options?.presencePenalty,
+ stop:
+ input.options?.stopSequences ?? this.config.options?.stopSequences,
+ },
+ });
+
+ return {
+ content: res.message.content,
+ toolCalls:
+ res.message.tool_calls?.map((tc) => ({
+ id: crypto.randomUUID(),
+ name: tc.function.name,
+ arguments: tc.function.arguments,
+ })) || [],
+ additionalInfo: {
+ reasoning: res.message.thinking,
+ },
+ };
+ }
+
+ async *streamText(
+ input: GenerateTextInput,
+  ): AsyncGenerator<StreamTextOutput> {
+ const ollamaTools: OllamaTool[] = [];
+
+ input.tools?.forEach((tool) => {
+ ollamaTools.push({
+ type: 'function',
+ function: {
+ name: tool.name,
+ description: tool.description,
+ parameters: z.toJSONSchema(tool.schema) as any,
+ },
+ });
+ });
+
+ const stream = await this.ollamaClient.chat({
+ model: this.config.model,
+ messages: this.convertToOllamaMessages(input.messages),
+ stream: true,
+ ...(reasoningModels.find((m) => this.config.model.includes(m))
+ ? { think: false }
+ : {}),
+ tools: ollamaTools.length > 0 ? ollamaTools : undefined,
+ options: {
+ top_p: input.options?.topP ?? this.config.options?.topP,
+ temperature:
+ input.options?.temperature ?? this.config.options?.temperature ?? 0.7,
+ num_ctx: 32000,
+ num_predict: input.options?.maxTokens ?? this.config.options?.maxTokens,
+ frequency_penalty:
+ input.options?.frequencyPenalty ??
+ this.config.options?.frequencyPenalty,
+ presence_penalty:
+ input.options?.presencePenalty ??
+ this.config.options?.presencePenalty,
+ stop:
+ input.options?.stopSequences ?? this.config.options?.stopSequences,
+ },
+ });
+
+ for await (const chunk of stream) {
+ yield {
+ contentChunk: chunk.message.content,
+ toolCallChunk:
+ chunk.message.tool_calls?.map((tc, i) => ({
+ id: crypto
+ .createHash('sha256')
+ .update(
+ `${i}-${tc.function.name}`,
+ ) /* Ollama currently doesn't return a tool call ID so we're creating one based on the index and tool call name */
+ .digest('hex'),
+ name: tc.function.name,
+ arguments: tc.function.arguments,
+ })) || [],
+ done: chunk.done,
+ additionalInfo: {
+ reasoning: chunk.message.thinking,
+ },
+ };
+ }
+ }
+
+  async generateObject<T>(input: GenerateObjectInput): Promise<T> {
+ const response = await this.ollamaClient.chat({
+ model: this.config.model,
+ messages: this.convertToOllamaMessages(input.messages),
+ format: z.toJSONSchema(input.schema),
+ ...(reasoningModels.find((m) => this.config.model.includes(m))
+ ? { think: false }
+ : {}),
+ options: {
+ top_p: input.options?.topP ?? this.config.options?.topP,
+ temperature:
+ input.options?.temperature ?? this.config.options?.temperature ?? 0.7,
+ num_predict: input.options?.maxTokens ?? this.config.options?.maxTokens,
+ frequency_penalty:
+ input.options?.frequencyPenalty ??
+ this.config.options?.frequencyPenalty,
+ presence_penalty:
+ input.options?.presencePenalty ??
+ this.config.options?.presencePenalty,
+ stop:
+ input.options?.stopSequences ?? this.config.options?.stopSequences,
+ },
+ });
+
+ try {
+ return input.schema.parse(JSON.parse(response.message.content)) as T;
+ } catch (err) {
+ throw new Error(`Error parsing response from Ollama: ${err}`);
+ }
+ }
+
+  async *streamObject<T>(input: GenerateObjectInput): AsyncGenerator<Partial<T>> {
+ let recievedObj: string = '';
+
+ const stream = await this.ollamaClient.chat({
+ model: this.config.model,
+ messages: this.convertToOllamaMessages(input.messages),
+ format: z.toJSONSchema(input.schema),
+ stream: true,
+ ...(reasoningModels.find((m) => this.config.model.includes(m))
+ ? { think: false }
+ : {}),
+ options: {
+ top_p: input.options?.topP ?? this.config.options?.topP,
+ temperature:
+ input.options?.temperature ?? this.config.options?.temperature ?? 0.7,
+ num_predict: input.options?.maxTokens ?? this.config.options?.maxTokens,
+ frequency_penalty:
+ input.options?.frequencyPenalty ??
+ this.config.options?.frequencyPenalty,
+ presence_penalty:
+ input.options?.presencePenalty ??
+ this.config.options?.presencePenalty,
+ stop:
+ input.options?.stopSequences ?? this.config.options?.stopSequences,
+ },
+ });
+
+ for await (const chunk of stream) {
+ recievedObj += chunk.message.content;
+
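+      // parse() from partial-json tolerates incomplete JSON, so each chunk yields a best-effort partial object.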
+ try {
+ yield parse(recievedObj) as T;
+ } catch (err) {
+ console.log('Error parsing partial object from Ollama:', err);
+ yield {} as T;
+ }
+ }
+ }
+}
+
+export default OllamaLLM;
diff --git a/src/lib/models/providers/openai.ts b/src/lib/models/providers/openai/index.ts
similarity index 85%
rename from src/lib/models/providers/openai.ts
rename to src/lib/models/providers/openai/index.ts
index 6055b34..772e762 100644
--- a/src/lib/models/providers/openai.ts
+++ b/src/lib/models/providers/openai/index.ts
@@ -1,10 +1,11 @@
-import { BaseChatModel } from '@langchain/core/language_models/chat_models';
-import { Model, ModelList, ProviderMetadata } from '../types';
-import BaseModelProvider from './baseProvider';
-import { ChatOpenAI, OpenAIEmbeddings } from '@langchain/openai';
-import { Embeddings } from '@langchain/core/embeddings';
import { UIConfigField } from '@/lib/config/types';
import { getConfiguredModelProviderById } from '@/lib/config/serverRegistry';
+import { Model, ModelList, ProviderMetadata } from '../../types';
+import OpenAIEmbedding from './openaiEmbedding';
+import BaseEmbedding from '../../base/embedding';
+import BaseModelProvider from '../../base/provider';
+import BaseLLM from '../../base/llm';
+import OpenAILLM from './openaiLLM';
interface OpenAIConfig {
apiKey: string;
@@ -145,7 +146,7 @@ class OpenAIProvider extends BaseModelProvider<OpenAIConfig> {
};
}
-  async loadChatModel(key: string): Promise<BaseChatModel> {
+  async loadChatModel(key: string): Promise<BaseLLM<any>> {
const modelList = await this.getModelList();
const exists = modelList.chat.find((m) => m.key === key);
@@ -156,17 +157,14 @@ class OpenAIProvider extends BaseModelProvider<OpenAIConfig> {
);
}
- return new ChatOpenAI({
+ return new OpenAILLM({
apiKey: this.config.apiKey,
- temperature: 0.7,
model: key,
- configuration: {
- baseURL: this.config.baseURL,
- },
+ baseURL: this.config.baseURL,
});
}
-  async loadEmbeddingModel(key: string): Promise<Embeddings> {
+  async loadEmbeddingModel(key: string): Promise<BaseEmbedding<any>> {
const modelList = await this.getModelList();
const exists = modelList.embedding.find((m) => m.key === key);
@@ -176,12 +174,10 @@ class OpenAIProvider extends BaseModelProvider<OpenAIConfig> {
);
}
- return new OpenAIEmbeddings({
+ return new OpenAIEmbedding({
apiKey: this.config.apiKey,
model: key,
- configuration: {
- baseURL: this.config.baseURL,
- },
+ baseURL: this.config.baseURL,
});
}
diff --git a/src/lib/models/providers/openai/openaiEmbedding.ts b/src/lib/models/providers/openai/openaiEmbedding.ts
new file mode 100644
index 0000000..4e137ad
--- /dev/null
+++ b/src/lib/models/providers/openai/openaiEmbedding.ts
@@ -0,0 +1,42 @@
+import OpenAI from 'openai';
+import BaseEmbedding from '../../base/embedding';
+import { Chunk } from '@/lib/types';
+
+type OpenAIConfig = {
+ apiKey: string;
+ model: string;
+ baseURL?: string;
+};
+
+class OpenAIEmbedding extends BaseEmbedding<OpenAIConfig> {
+ openAIClient: OpenAI;
+
+ constructor(protected config: OpenAIConfig) {
+ super(config);
+
+ this.openAIClient = new OpenAI({
+ apiKey: config.apiKey,
+ baseURL: config.baseURL,
+ });
+ }
+
+  async embedText(texts: string[]): Promise<number[][]> {
+ const response = await this.openAIClient.embeddings.create({
+ model: this.config.model,
+ input: texts,
+ });
+
+ return response.data.map((embedding) => embedding.embedding);
+ }
+
+  async embedChunks(chunks: Chunk[]): Promise<number[][]> {
+ const response = await this.openAIClient.embeddings.create({
+ model: this.config.model,
+ input: chunks.map((c) => c.content),
+ });
+
+ return response.data.map((embedding) => embedding.embedding);
+ }
+}
+
+export default OpenAIEmbedding;
diff --git a/src/lib/models/providers/openai/openaiLLM.ts b/src/lib/models/providers/openai/openaiLLM.ts
new file mode 100644
index 0000000..a40714e
--- /dev/null
+++ b/src/lib/models/providers/openai/openaiLLM.ts
@@ -0,0 +1,269 @@
+import OpenAI from 'openai';
+import BaseLLM from '../../base/llm';
+import { zodTextFormat, zodResponseFormat } from 'openai/helpers/zod';
+import {
+ GenerateObjectInput,
+ GenerateOptions,
+ GenerateTextInput,
+ GenerateTextOutput,
+ StreamTextOutput,
+ ToolCall,
+} from '../../types';
+import { parse } from 'partial-json';
+import z from 'zod';
+import {
+ ChatCompletionAssistantMessageParam,
+ ChatCompletionMessageParam,
+ ChatCompletionTool,
+ ChatCompletionToolMessageParam,
+} from 'openai/resources/index.mjs';
+import { Message } from '@/lib/types';
+
+type OpenAIConfig = {
+ apiKey: string;
+ model: string;
+ baseURL?: string;
+ options?: GenerateOptions;
+};
+
+class OpenAILLM extends BaseLLM<OpenAIConfig> {
+ openAIClient: OpenAI;
+
+ constructor(protected config: OpenAIConfig) {
+ super(config);
+
+ this.openAIClient = new OpenAI({
+ apiKey: this.config.apiKey,
+ baseURL: this.config.baseURL || 'https://api.openai.com/v1',
+ });
+ }
+
+ convertToOpenAIMessages(messages: Message[]): ChatCompletionMessageParam[] {
+ return messages.map((msg) => {
+ if (msg.role === 'tool') {
+ return {
+ role: 'tool',
+ tool_call_id: msg.id,
+ content: msg.content,
+ } as ChatCompletionToolMessageParam;
+ } else if (msg.role === 'assistant') {
+ return {
+ role: 'assistant',
+ content: msg.content,
+ ...(msg.tool_calls &&
+ msg.tool_calls.length > 0 && {
+ tool_calls: msg.tool_calls?.map((tc) => ({
+ id: tc.id,
+ type: 'function',
+ function: {
+ name: tc.name,
+ arguments: JSON.stringify(tc.arguments),
+ },
+ })),
+ }),
+ } as ChatCompletionAssistantMessageParam;
+ }
+
+ return msg;
+ });
+ }
+
+  async generateText(input: GenerateTextInput): Promise<GenerateTextOutput> {
+ const openaiTools: ChatCompletionTool[] = [];
+
+ input.tools?.forEach((tool) => {
+ openaiTools.push({
+ type: 'function',
+ function: {
+ name: tool.name,
+ description: tool.description,
+ parameters: z.toJSONSchema(tool.schema),
+ },
+ });
+ });
+
+ const response = await this.openAIClient.chat.completions.create({
+ model: this.config.model,
+ tools: openaiTools.length > 0 ? openaiTools : undefined,
+ messages: this.convertToOpenAIMessages(input.messages),
+ temperature:
+ input.options?.temperature ?? this.config.options?.temperature ?? 1.0,
+ top_p: input.options?.topP ?? this.config.options?.topP,
+ max_completion_tokens:
+ input.options?.maxTokens ?? this.config.options?.maxTokens,
+ stop: input.options?.stopSequences ?? this.config.options?.stopSequences,
+ frequency_penalty:
+ input.options?.frequencyPenalty ??
+ this.config.options?.frequencyPenalty,
+ presence_penalty:
+ input.options?.presencePenalty ?? this.config.options?.presencePenalty,
+ });
+
+ if (response.choices && response.choices.length > 0) {
+ return {
+ content: response.choices[0].message.content!,
+ toolCalls:
+ response.choices[0].message.tool_calls
+ ?.map((tc) => {
+ if (tc.type === 'function') {
+ return {
+ name: tc.function.name,
+ id: tc.id,
+ arguments: JSON.parse(tc.function.arguments),
+ };
+ }
+ })
+ .filter((tc) => tc !== undefined) || [],
+ additionalInfo: {
+ finishReason: response.choices[0].finish_reason,
+ },
+ };
+ }
+
+ throw new Error('No response from OpenAI');
+ }
+
+ async *streamText(
+ input: GenerateTextInput,
+  ): AsyncGenerator<StreamTextOutput> {
+ const openaiTools: ChatCompletionTool[] = [];
+
+ input.tools?.forEach((tool) => {
+ openaiTools.push({
+ type: 'function',
+ function: {
+ name: tool.name,
+ description: tool.description,
+ parameters: z.toJSONSchema(tool.schema),
+ },
+ });
+ });
+
+ const stream = await this.openAIClient.chat.completions.create({
+ model: this.config.model,
+ messages: this.convertToOpenAIMessages(input.messages),
+ tools: openaiTools.length > 0 ? openaiTools : undefined,
+ temperature:
+ input.options?.temperature ?? this.config.options?.temperature ?? 1.0,
+ top_p: input.options?.topP ?? this.config.options?.topP,
+ max_completion_tokens:
+ input.options?.maxTokens ?? this.config.options?.maxTokens,
+ stop: input.options?.stopSequences ?? this.config.options?.stopSequences,
+ frequency_penalty:
+ input.options?.frequencyPenalty ??
+ this.config.options?.frequencyPenalty,
+ presence_penalty:
+ input.options?.presencePenalty ?? this.config.options?.presencePenalty,
+ stream: true,
+ });
+
+ let recievedToolCalls: { name: string; id: string; arguments: string }[] =
+ [];
+
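+    // Argument fragments for a streamed tool call arrive across chunks; accumulate them per call before parsing.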
+ for await (const chunk of stream) {
+ if (chunk.choices && chunk.choices.length > 0) {
+ const toolCalls = chunk.choices[0].delta.tool_calls;
+ yield {
+ contentChunk: chunk.choices[0].delta.content || '',
+ toolCallChunk:
+ toolCalls?.map((tc) => {
+ if (tc.type === 'function') {
+ const call = {
+ name: tc.function?.name!,
+ id: tc.id!,
+ arguments: tc.function?.arguments || '',
+ };
+ recievedToolCalls.push(call);
+ return { ...call, arguments: parse(call.arguments || '{}') };
+ } else {
+ const existingCall = recievedToolCalls[tc.index];
+ existingCall.arguments += tc.function?.arguments || '';
+ return {
+ ...existingCall,
+ arguments: parse(existingCall.arguments),
+ };
+ }
+ }) || [],
+ done: chunk.choices[0].finish_reason !== null,
+ additionalInfo: {
+ finishReason: chunk.choices[0].finish_reason,
+ },
+ };
+ }
+ }
+ }
+
+  async generateObject<T>(input: GenerateObjectInput): Promise<T> {
+ const response = await this.openAIClient.chat.completions.parse({
+ messages: this.convertToOpenAIMessages(input.messages),
+ model: this.config.model,
+ temperature:
+ input.options?.temperature ?? this.config.options?.temperature ?? 1.0,
+ top_p: input.options?.topP ?? this.config.options?.topP,
+ max_completion_tokens:
+ input.options?.maxTokens ?? this.config.options?.maxTokens,
+ stop: input.options?.stopSequences ?? this.config.options?.stopSequences,
+ frequency_penalty:
+ input.options?.frequencyPenalty ??
+ this.config.options?.frequencyPenalty,
+ presence_penalty:
+ input.options?.presencePenalty ?? this.config.options?.presencePenalty,
+ response_format: zodResponseFormat(input.schema, 'object'),
+ });
+
+ if (response.choices && response.choices.length > 0) {
+ try {
+ return input.schema.parse(response.choices[0].message.parsed) as T;
+ } catch (err) {
+ throw new Error(`Error parsing response from OpenAI: ${err}`);
+ }
+ }
+
+ throw new Error('No response from OpenAI');
+ }
+
+  async *streamObject<T>(input: GenerateObjectInput): AsyncGenerator<Partial<T>> {
+ let recievedObj: string = '';
+
+ const stream = this.openAIClient.responses.stream({
+ model: this.config.model,
+ input: input.messages,
+ temperature:
+ input.options?.temperature ?? this.config.options?.temperature ?? 1.0,
+ top_p: input.options?.topP ?? this.config.options?.topP,
+ max_completion_tokens:
+ input.options?.maxTokens ?? this.config.options?.maxTokens,
+ stop: input.options?.stopSequences ?? this.config.options?.stopSequences,
+ frequency_penalty:
+ input.options?.frequencyPenalty ??
+ this.config.options?.frequencyPenalty,
+ presence_penalty:
+ input.options?.presencePenalty ?? this.config.options?.presencePenalty,
+ text: {
+ format: zodTextFormat(input.schema, 'object'),
+ },
+ });
+
+ for await (const chunk of stream) {
+ if (chunk.type === 'response.output_text.delta' && chunk.delta) {
+ recievedObj += chunk.delta;
+
+ try {
+ yield parse(recievedObj) as T;
+ } catch (err) {
+ console.log('Error parsing partial object from OpenAI:', err);
+ yield {} as T;
+ }
+ } else if (chunk.type === 'response.output_text.done' && chunk.text) {
+ try {
+ yield parse(chunk.text) as T;
+ } catch (err) {
+ throw new Error(`Error parsing response from OpenAI: ${err}`);
+ }
+ }
+ }
+ }
+}
+
+export default OpenAILLM;
diff --git a/src/lib/models/providers/transformers.ts b/src/lib/models/providers/transformers/index.ts
similarity index 77%
rename from src/lib/models/providers/transformers.ts
rename to src/lib/models/providers/transformers/index.ts
index afd6b9e..e60e94f 100644
--- a/src/lib/models/providers/transformers.ts
+++ b/src/lib/models/providers/transformers/index.ts
@@ -1,10 +1,11 @@
-import { BaseChatModel } from '@langchain/core/language_models/chat_models';
-import { Model, ModelList, ProviderMetadata } from '../types';
-import BaseModelProvider from './baseProvider';
-import { Embeddings } from '@langchain/core/embeddings';
import { UIConfigField } from '@/lib/config/types';
import { getConfiguredModelProviderById } from '@/lib/config/serverRegistry';
-import { HuggingFaceTransformersEmbeddings } from '@langchain/community/embeddings/huggingface_transformers';
+import { Model, ModelList, ProviderMetadata } from '../../types';
+import BaseModelProvider from '../../base/provider';
+import BaseLLM from '../../base/llm';
+import BaseEmbedding from '../../base/embedding';
+import TransformerEmbedding from './transformerEmbedding';
+
interface TransformersConfig {}
const defaultEmbeddingModels: Model[] = [
@@ -49,11 +50,11 @@ class TransformersProvider extends BaseModelProvider<TransformersConfig> {
};
}
-  async loadChatModel(key: string): Promise<BaseChatModel> {
+  async loadChatModel(key: string): Promise<BaseLLM<any>> {
throw new Error('Transformers Provider does not support chat models.');
}
-  async loadEmbeddingModel(key: string): Promise<Embeddings> {
+  async loadEmbeddingModel(key: string): Promise<BaseEmbedding<any>> {
const modelList = await this.getModelList();
const exists = modelList.embedding.find((m) => m.key === key);
@@ -63,7 +64,7 @@ class TransformersProvider extends BaseModelProvider<TransformersConfig> {
);
}
- return new HuggingFaceTransformersEmbeddings({
+ return new TransformerEmbedding({
model: key,
});
}
diff --git a/src/lib/models/providers/transformers/transformerEmbedding.ts b/src/lib/models/providers/transformers/transformerEmbedding.ts
new file mode 100644
index 0000000..b3f43f0
--- /dev/null
+++ b/src/lib/models/providers/transformers/transformerEmbedding.ts
@@ -0,0 +1,44 @@
+import { Chunk } from '@/lib/types';
+import BaseEmbedding from '../../base/embedding';
+import { FeatureExtractionPipeline, pipeline } from '@huggingface/transformers';
+
+type TransformerConfig = {
+ model: string;
+};
+
+class TransformerEmbedding extends BaseEmbedding<TransformerConfig> {
+  private pipelinePromise: Promise<FeatureExtractionPipeline> | null = null;
+
+ constructor(protected config: TransformerConfig) {
+ super(config);
+ }
+
+  async embedText(texts: string[]): Promise<number[][]> {
+ return this.embed(texts);
+ }
+
+  async embedChunks(chunks: Chunk[]): Promise<number[][]> {
+ return this.embed(chunks.map((c) => c.content));
+ }
+
+  async embed(texts: string[]): Promise<number[][]> {
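+    // Lazily create the feature-extraction pipeline on first use and cache the promise across calls.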
+ if (!this.pipelinePromise) {
+ this.pipelinePromise = (async () => {
+ const transformers = await import('@huggingface/transformers');
+ return (await transformers.pipeline(
+ 'feature-extraction',
+ this.config.model,
+ )) as unknown as FeatureExtractionPipeline;
+ })();
+ }
+
+ const pipeline = await this.pipelinePromise;
+
+ const output = await pipeline(texts, { pooling: 'mean', normalize: true });
+
+ return output.tolist() as number[][];
+ }
+}
+
+export default TransformerEmbedding;
diff --git a/src/lib/models/registry.ts b/src/lib/models/registry.ts
index 5067b6d..687c84c 100644
--- a/src/lib/models/registry.ts
+++ b/src/lib/models/registry.ts
@@ -1,7 +1,5 @@
import { ConfigModelProvider } from '../config/types';
-import BaseModelProvider, {
- createProviderInstance,
-} from './providers/baseProvider';
+import BaseModelProvider, { createProviderInstance } from './base/provider';
import { getConfiguredModelProviders } from '../config/serverRegistry';
import { providers } from './providers';
import { MinimalProvider, ModelList } from './types';
diff --git a/src/lib/models/types.ts b/src/lib/models/types.ts
index fdd5df2..8abefd7 100644
--- a/src/lib/models/types.ts
+++ b/src/lib/models/types.ts
@@ -1,3 +1,6 @@
+import z from 'zod';
+import { Message } from '../types';
+
type Model = {
name: string;
key: string;
@@ -25,10 +28,77 @@ type ModelWithProvider = {
providerId: string;
};
+type GenerateOptions = {
+ temperature?: number;
+ maxTokens?: number;
+ topP?: number;
+ stopSequences?: string[];
+ frequencyPenalty?: number;
+ presencePenalty?: number;
+};
+
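+// Tools are declared with zod schemas; each provider converts them to JSON Schema for its API.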
+type Tool = {
+ name: string;
+ description: string;
+  schema: z.ZodObject<any>;
+};
+
+type ToolCall = {
+ id: string;
+ name: string;
+  arguments: Record<string, any>;
+};
+
+type GenerateTextInput = {
+ messages: Message[];
+ tools?: Tool[];
+ options?: GenerateOptions;
+};
+
+type GenerateTextOutput = {
+ content: string;
+ toolCalls: ToolCall[];
+  additionalInfo?: Record<string, any>;
+};
+
+type StreamTextOutput = {
+ contentChunk: string;
+ toolCallChunk: ToolCall[];
+  additionalInfo?: Record<string, any>;
+ done?: boolean;
+};
+
+type GenerateObjectInput = {
+ schema: z.ZodTypeAny;
+ messages: Message[];
+ options?: GenerateOptions;
+};
+
+type GenerateObjectOutput<T> = {
+  object: T;
+  additionalInfo?: Record<string, any>;
+};
+
+type StreamObjectOutput<T> = {
+  objectChunk: Partial<T>;
+  additionalInfo?: Record<string, any>;
+ done?: boolean;
+};
+
export type {
Model,
ModelList,
ProviderMetadata,
MinimalProvider,
ModelWithProvider,
+ GenerateOptions,
+ GenerateTextInput,
+ GenerateTextOutput,
+ StreamTextOutput,
+ GenerateObjectInput,
+ GenerateObjectOutput,
+ StreamObjectOutput,
+ Tool,
+ ToolCall,
};
diff --git a/src/lib/outputParsers/lineOutputParser.ts b/src/lib/outputParsers/lineOutputParser.ts
deleted file mode 100644
index 5c795f2..0000000
--- a/src/lib/outputParsers/lineOutputParser.ts
+++ /dev/null
@@ -1,48 +0,0 @@
-import { BaseOutputParser } from '@langchain/core/output_parsers';
-
-interface LineOutputParserArgs {
- key?: string;
-}
-
-class LineOutputParser extends BaseOutputParser<string | undefined> {
- private key = 'questions';
-
- constructor(args?: LineOutputParserArgs) {
- super();
- this.key = args?.key ?? this.key;
- }
-
- static lc_name() {
- return 'LineOutputParser';
- }
-
- lc_namespace = ['langchain', 'output_parsers', 'line_output_parser'];
-
- async parse(text: string): Promise<string | undefined> {
- text = text.trim() || '';
-
- const regex = /^(\s*(-|\*|\d+\.\s|\d+\)\s|\u2022)\s*)+/;
- const startKeyIndex = text.indexOf(`<${this.key}>`);
- const endKeyIndex = text.indexOf(`</${this.key}>`);
-
- if (startKeyIndex === -1 || endKeyIndex === -1) {
- return undefined;
- }
-
- const questionsStartIndex =
- startKeyIndex === -1 ? 0 : startKeyIndex + `<${this.key}>`.length;
- const questionsEndIndex = endKeyIndex === -1 ? text.length : endKeyIndex;
- const line = text
- .slice(questionsStartIndex, questionsEndIndex)
- .trim()
- .replace(regex, '');
-
- return line;
- }
-
- getFormatInstructions(): string {
- throw new Error('Not implemented.');
- }
-}
-
-export default LineOutputParser;
diff --git a/src/lib/outputParsers/listLineOutputParser.ts b/src/lib/outputParsers/listLineOutputParser.ts
deleted file mode 100644
index 6409db9..0000000
--- a/src/lib/outputParsers/listLineOutputParser.ts
+++ /dev/null
@@ -1,50 +0,0 @@
-import { BaseOutputParser } from '@langchain/core/output_parsers';
-
-interface LineListOutputParserArgs {
- key?: string;
-}
-
-class LineListOutputParser extends BaseOutputParser<string[]> {
- private key = 'questions';
-
- constructor(args?: LineListOutputParserArgs) {
- super();
- this.key = args?.key ?? this.key;
- }
-
- static lc_name() {
- return 'LineListOutputParser';
- }
-
- lc_namespace = ['langchain', 'output_parsers', 'line_list_output_parser'];
-
- async parse(text: string): Promise<string[]> {
- text = text.trim() || '';
-
- const regex = /^(\s*(-|\*|\d+\.\s|\d+\)\s|\u2022)\s*)+/;
- const startKeyIndex = text.indexOf(`<${this.key}>`);
- const endKeyIndex = text.indexOf(`</${this.key}>`);
-
- if (startKeyIndex === -1 || endKeyIndex === -1) {
- return [];
- }
-
- const questionsStartIndex =
- startKeyIndex === -1 ? 0 : startKeyIndex + `<${this.key}>`.length;
- const questionsEndIndex = endKeyIndex === -1 ? text.length : endKeyIndex;
- const lines = text
- .slice(questionsStartIndex, questionsEndIndex)
- .trim()
- .split('\n')
- .filter((line) => line.trim() !== '')
- .map((line) => line.replace(regex, ''));
-
- return lines;
- }
-
- getFormatInstructions(): string {
- throw new Error('Not implemented.');
- }
-}
-
-export default LineListOutputParser;
diff --git a/src/lib/prompts/index.ts b/src/lib/prompts/index.ts
deleted file mode 100644
index fd1a85a..0000000
--- a/src/lib/prompts/index.ts
+++ /dev/null
@@ -1,13 +0,0 @@
-import {
- webSearchResponsePrompt,
- webSearchRetrieverFewShots,
- webSearchRetrieverPrompt,
-} from './webSearch';
-import { writingAssistantPrompt } from './writingAssistant';
-
-export default {
- webSearchResponsePrompt,
- webSearchRetrieverPrompt,
- webSearchRetrieverFewShots,
- writingAssistantPrompt,
-};
diff --git a/src/lib/prompts/media/image.ts b/src/lib/prompts/media/image.ts
new file mode 100644
index 0000000..d4584cb
--- /dev/null
+++ b/src/lib/prompts/media/image.ts
@@ -0,0 +1,29 @@
+import { ChatTurnMessage } from '@/lib/types';
+
+export const imageSearchPrompt = `
+You will be given a conversation below and a follow-up question. You need to rephrase the follow-up question so it is a standalone question that can be used by the LLM to search the web for images.
+You need to make sure the rephrased question agrees with the conversation and is relevant to it.
+Output only the rephrased query as a JSON object with a single "query" key. Do not include any explanation or additional text.
+`;
+
+export const imageSearchFewShots: ChatTurnMessage[] = [
+ {
+ role: 'user',
+ content:
+ '\n \n\nWhat is a cat?\n ',
+ },
+ { role: 'assistant', content: '{"query":"A cat"}' },
+
+ {
+ role: 'user',
+ content:
+ '\n \n\nWhat is a car? How does it work?\n ',
+ },
+ { role: 'assistant', content: '{"query":"Car working"}' },
+ {
+ role: 'user',
+ content:
+ '\n \n\nHow does an AC work?\n ',
+ },
+ { role: 'assistant', content: '{"query":"AC working"}' },
+];
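Both media prompts (this one and the video prompt that follows) promise a bare `{"query": "..."}` object, so a strict schema can guard the parse. A sketch, assuming zod handles validation here as it does elsewhere in the codebase:

```typescript
import z from 'zod';

// Assumed validation of the {"query": "..."} contract; not shown in the diff.
const MediaQuerySchema = z.object({ query: z.string().min(1) });

const raw = '{"query":"A cat"}'; // e.g. the assistant replies in the few-shots
const { query } = MediaQuerySchema.parse(JSON.parse(raw));
console.log(query); // "A cat"
```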
diff --git a/src/lib/prompts/media/videos.ts b/src/lib/prompts/media/videos.ts
new file mode 100644
index 0000000..adaa7b5
--- /dev/null
+++ b/src/lib/prompts/media/videos.ts
@@ -0,0 +1,28 @@
+import { ChatTurnMessage } from '@/lib/types';
+
+export const videoSearchPrompt = `
+You will be given a conversation below and a follow-up question. You need to rephrase the follow-up question so it is a standalone question that can be used by the LLM to search YouTube for videos.
+You need to make sure the rephrased question agrees with the conversation and is relevant to it.
+Output only the rephrased query as a JSON object with a single "query" key. Do not include any explanation or additional text.
+`;
+
+export const videoSearchFewShots: ChatTurnMessage[] = [
+ {
+ role: 'user',
+ content:
+ '\n \n\nHow does a car work?\n ',
+ },
+ { role: 'assistant', content: '{"query":"How does a car work?"}' },
+ {
+ role: 'user',
+ content:
+ '\n \n\nWhat is the theory of relativity?\n ',
+ },
+ { role: 'assistant', content: '{"query":"Theory of relativity"}' },
+ {
+ role: 'user',
+ content:
+ '\n \n\nHow does an AC work?\n ',
+ },
+ { role: 'assistant', content: '{"query":"AC working"}' },
+];
diff --git a/src/lib/prompts/search/classifier.ts b/src/lib/prompts/search/classifier.ts
new file mode 100644
index 0000000..770b86d
--- /dev/null
+++ b/src/lib/prompts/search/classifier.ts
@@ -0,0 +1,64 @@
+export const classifierPrompt = `
+
+Assistant is an advanced AI system designed to analyze the user query and the conversation history to determine the most appropriate classification for the search operation.
+It will be given a detailed conversation history and a user query, and it must classify the query based on the guidelines and label definitions provided. It must also generate a standalone follow-up question that is self-contained and context-independent.
+
+
+
+NOTE: BY GENERAL KNOWLEDGE WE MEAN INFORMATION THAT IS OBVIOUS, WIDELY KNOWN, OR CAN BE INFERRED WITHOUT EXTERNAL SOURCES FOR EXAMPLE MATHEMATICAL FACTS, BASIC SCIENTIFIC KNOWLEDGE, COMMON HISTORICAL EVENTS, ETC.
+1. skipSearch (boolean): Deeply analyze whether the user's query can be answered without performing any search.
+ - Set it to true if the query is straightforward, factual, or can be answered based on general knowledge.
+ - Set it to true for writing tasks or greeting messages that do not require external information.
+ - Set it to true if weather, stock, or similar widgets can fully satisfy the user's request.
+ - Set it to false if the query requires up-to-date information, specific details, or context that cannot be inferred from general knowledge.
+ - ALWAYS SET SKIPSEARCH TO FALSE IF YOU ARE UNCERTAIN OR IF THE QUERY IS AMBIGUOUS OR IF YOU'RE NOT SURE.
+2. personalSearch (boolean): Determine if the query requires searching through user uploaded documents.
+ - Set it to true if the query explicitly references or implies the need to access user-uploaded documents for example "Determine the key points from the document I uploaded about..." or "Who is the author?", "Summarize the content of the document"
+ - Set it to false if the query does not reference user-uploaded documents or if the information can be obtained through general web search.
+ - ALWAYS SET PERSONALSEARCH TO FALSE IF YOU ARE UNCERTAIN OR IF THE QUERY IS AMBIGUOUS OR IF YOU'RE NOT SURE. AND SET SKIPSEARCH TO FALSE AS WELL.
+3. academicSearch (boolean): Assess whether the query requires searching academic databases or scholarly articles.
+ - Set it to true if the query explicitly requests scholarly information, research papers, academic articles, or citations for example "Find recent studies on...", "What does the latest research say about...", or "Provide citations for..."
+ - Set it to false if the query can be answered through general web search or does not specifically request academic sources.
+4. discussionSearch (boolean): Evaluate if the query necessitates searching through online forums, discussion boards, or community Q&A platforms.
+ - Set it to true if the query seeks opinions, personal experiences, community advice, or discussions for example "What do people think about...", "Are there any discussions on...", or "What are the common issues faced by..."
+ - Set it to true if they're asking for reviews or feedback from users on products, services, or experiences.
+ - Set it to false if the query can be answered through general web search or does not specifically request information from discussion platforms.
+5. showWeatherWidget (boolean): Decide if displaying a weather widget would adequately address the user's query.
+ - Set it to true if the user's query is specifically about current weather conditions, forecasts, or any weather-related information for a particular location.
+ - Set it to true for queries like "What's the weather like in [Location]?" or "Will it rain tomorrow in [Location]?" or "Show me the weather" (Here they mean weather of their current location).
+ - If it can fully answer the user query without needing additional search, set skipSearch to true as well.
+6. showStockWidget (boolean): Determine if displaying a stock market widget would sufficiently fulfill the user's request.
+ - Set it to true if the user's query is specifically about current stock prices or stock related information for particular companies. Never use it for a market analysis or news about stock market.
+ - Set it to true for queries like "What's the stock price of [Company]?" or "How is the [Stock] performing today?" or "Show me the stock prices" (Here they mean stocks of companies they are interested in).
+ - If it can fully answer the user query without needing additional search, set skipSearch to true as well.
+7. showCalculationWidget (boolean): Decide if displaying a calculation widget would adequately address the user's query.
+ - Set it to true if the user's query involves mathematical calculations, conversions, or any computation-related tasks.
+ - Set it to true for queries like "What is 25% of 80?" or "Convert 100 USD to EUR" or "Calculate the square root of 256" or "What is 2 * 3 + 5?" or other mathematical expressions.
+ - If it can fully answer the user query without needing additional search, set skipSearch to true as well.
+
+
+
+For the standalone follow-up, you have to generate a self-contained, context-independent reformulation of the user's query.
+You have to rephrase the user's query so that it can be understood without any prior context from the conversation history.
+For example, if the conversation is about cars and the user says "How do they work", the standalone follow-up should be "How do cars work?"
+
+Do not include excess information or everything that has been discussed before; just reformulate the user's last query in a self-contained manner.
+The standalone follow-up should be concise and to the point.
+
+
+
+You must respond in the following JSON format without any extra text, explanations or filler sentences:
+{
+ "classification": {
+ "skipSearch": boolean,
+ "personalSearch": boolean,
+ "academicSearch": boolean,
+ "discussionSearch": boolean,
+ "showWeatherWidget": boolean,
+ "showStockWidget": boolean,
+ "showCalculationWidget": boolean,
+ },
+ "standaloneFollowUp": string
+}
+
+`;
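A zod schema mirroring the classifier's declared output format might look like this (a sketch only; the diff does not show the actual parsing code):

```typescript
import z from 'zod';

// Mirrors the JSON format declared at the end of the prompt (sketch only).
const ClassifierOutputSchema = z.object({
  classification: z.object({
    skipSearch: z.boolean(),
    personalSearch: z.boolean(),
    academicSearch: z.boolean(),
    discussionSearch: z.boolean(),
    showWeatherWidget: z.boolean(),
    showStockWidget: z.boolean(),
    showCalculationWidget: z.boolean(),
  }),
  standaloneFollowUp: z.string(),
});

type ClassifierOutput = z.infer<typeof ClassifierOutputSchema>;
```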
diff --git a/src/lib/prompts/search/researcher.ts b/src/lib/prompts/search/researcher.ts
new file mode 100644
index 0000000..537d488
--- /dev/null
+++ b/src/lib/prompts/search/researcher.ts
@@ -0,0 +1,354 @@
+import BaseEmbedding from '@/lib/models/base/embedding';
+import UploadStore from '@/lib/uploads/store';
+
+const getSpeedPrompt = (
+ actionDesc: string,
+ i: number,
+ maxIteration: number,
+ fileDesc: string,
+) => {
+ const today = new Date().toLocaleDateString('en-US', {
+ year: 'numeric',
+ month: 'long',
+ day: 'numeric',
+ });
+
+ return `
+ Assistant is an action orchestrator. Your job is to fulfill user requests by selecting and executing the available tools—no free-form replies.
+ You will be given the conversation history between the user and an AI, along with the user's latest follow-up question. Based on this, you must use the available tools to fulfill the user's request.
+
+ Today's date: ${today}
+
+ You are currently on iteration ${i + 1} of your research process and have ${maxIteration} total iterations so act efficiently.
+ When you are finished, you must call the \`done\` tool. Never output text directly.
+
+
+ Fulfill the user's request as quickly as possible using the available tools.
+ Call tools to gather information or perform tasks as needed.
+
+
+
+ Your knowledge is outdated; if you have web search, use it to ground answers even for seemingly basic facts.
+
+
+
+
+ ## Example 1: Unknown Subject
+ User: "What is Kimi K2?"
+ Action: web_search ["Kimi K2", "Kimi K2 AI"] then done.
+
+ ## Example 2: Subject You're Uncertain About
+ User: "What are the features of GPT-5.1?"
+ Action: web_search ["GPT-5.1", "GPT-5.1 features", "GPT-5.1 release"] then done.
+
+ ## Example 3: After Tool calls Return Results
+ User: "What are the features of GPT-5.1?"
+ [Previous tool calls returned the needed info]
+ Action: done.
+
+
+
+
+ ${actionDesc}
+
+
+
+
+1. **Over-assuming**: Don't assume things exist or don't exist - just look them up
+
+2. **Verification obsession**: Don't waste tool calls "verifying existence" - just search for the thing directly
+
+3. **Endless loops**: If 2-3 tool calls don't find something, it probably doesn't exist - report that and move on
+
+4. **Ignoring task context**: If user wants a calendar event, don't just search - create the event
+
+5. **Overthinking**: Keep reasoning simple and tool calls focused
+
+
+
+
+- NEVER output normal text to the user. ONLY call tools.
+- Choose the appropriate tools based on the action descriptions provided above.
+- Default to web_search when information is missing or stale; keep queries targeted (max 3 per call).
+- Call done when you have gathered enough to answer or performed the required actions.
+- Do not invent tools. Do not return JSON.
+
+
+ ${
+ fileDesc.length > 0
+ ? `
+ The user has uploaded the following files which may be relevant to their request:
+ ${fileDesc}
+ You can use the uploaded files search tool to look for information within these documents if needed.
+ `
+ : ''
+ }
+ `;
+};
+
+const getBalancedPrompt = (
+ actionDesc: string,
+ i: number,
+ maxIteration: number,
+ fileDesc: string,
+) => {
+ const today = new Date().toLocaleDateString('en-US', {
+ year: 'numeric',
+ month: 'long',
+ day: 'numeric',
+ });
+
+ return `
+ Assistant is an action orchestrator. Your job is to fulfill user requests by reasoning briefly and executing the available tools—no free-form replies.
+ You will be given the conversation history between the user and an AI, along with the user's latest follow-up question. Based on this, you must use the available tools to fulfill the user's request.
+
+ Today's date: ${today}
+
+ You are currently on iteration ${i + 1} of your research process and have ${maxIteration} total iterations so act efficiently.
+ When you are finished, you must call the \`done\` tool. Never output text directly.
+
+
+ Fulfill the user's request with concise reasoning plus focused actions.
+ You must call the __reasoning_preamble tool before every tool call in this assistant turn. Alternate: __reasoning_preamble → tool → __reasoning_preamble → tool ... and finish with __reasoning_preamble → done. Open each __reasoning_preamble with a brief intent phrase (e.g., "Okay, the user wants to...", "Searching for...", "Looking into...") and lay out your reasoning for the next step. Keep it natural language, no tool names.
+
+
+
+ Your knowledge is outdated; if you have web search, use it to ground answers even for seemingly basic facts.
+ You can call at most 6 tools total per turn: up to 2 reasoning (__reasoning_preamble counts as reasoning), 2-3 information-gathering calls, and 1 done. If you hit the cap, stop after done.
+ Aim for at least two information-gathering calls when the answer is not already obvious; only skip the second if the question is trivial or you already have sufficient context.
+ Do not spam searches—pick the most targeted queries.
+
+
+
+ Call done only after the reasoning plus the necessary tool calls are completed and you have enough to answer. If you call done early, stop. If you reach the tool cap, call done to conclude.
+
+
+
+
+ ## Example 1: Unknown Subject
+ User: "What is Kimi K2?"
+ Reason: "Okay, the user wants to know about Kimi K2. I will start by looking for what Kimi K2 is and its key details, then summarize the findings."
+ Action: web_search ["Kimi K2", "Kimi K2 AI"] then reasoning then done.
+
+ ## Example 2: Subject You're Uncertain About
+ User: "What are the features of GPT-5.1?"
+ Reason: "The user is asking about GPT-5.1 features. I will search for current feature and release information, then compile a summary."
+ Action: web_search ["GPT-5.1", "GPT-5.1 features", "GPT-5.1 release"] then reasoning then done.
+
+ ## Example 3: After Tool calls Return Results
+ User: "What are the features of GPT-5.1?"
+ [Previous tool calls returned the needed info]
+ Reason: "I have gathered enough information about GPT-5.1 features; I will now wrap up."
+ Action: done.
+
+
+
+
+ YOU MUST CALL __reasoning_preamble BEFORE EVERY TOOL CALL IN THIS ASSISTANT TURN. IF YOU DO NOT CALL IT, THE TOOL CALL WILL BE IGNORED.
+ ${actionDesc}
+
+
+
+
+1. **Over-assuming**: Don't assume things exist or don't exist - just look them up
+
+2. **Verification obsession**: Don't waste tool calls "verifying existence" - just search for the thing directly
+
+3. **Endless loops**: If 2-3 tool calls don't find something, it probably doesn't exist - report that and move on
+
+4. **Ignoring task context**: If user wants a calendar event, don't just search - create the event
+
+5. **Overthinking**: Keep reasoning simple and tool calls focused
+
+6. **Skipping the reasoning step**: Always call __reasoning_preamble first to outline your approach before other actions
+
+
+
+
+- NEVER output normal text to the user. ONLY call tools.
+- Start with __reasoning_preamble and call __reasoning_preamble before every tool call (including done): open with intent phrase ("Okay, the user wants to...", "Looking into...", etc.) and lay out your reasoning for the next step. No tool names.
+- Choose tools based on the action descriptions provided above.
+- Default to web_search when information is missing or stale; keep queries targeted (max 3 per call).
+- Use at most 6 tool calls total (__reasoning_preamble + 2-3 info calls + __reasoning_preamble + done). If done is called early, stop.
+- Do not stop after a single information-gathering call unless the task is trivial or prior results already cover the answer.
+- Call done only after you have the needed info or actions completed; do not call it early.
+- Do not invent tools. Do not return JSON.
+
+
+ ${
+ fileDesc.length > 0
+ ? `
+ The user has uploaded the following files which may be relevant to their request:
+ ${fileDesc}
+ You can use the uploaded files search tool to look for information within these documents if needed.
+ `
+ : ''
+ }
+ `;
+};
+
+const getQualityPrompt = (
+ actionDesc: string,
+ i: number,
+ maxIteration: number,
+ fileDesc: string,
+) => {
+ const today = new Date().toLocaleDateString('en-US', {
+ year: 'numeric',
+ month: 'long',
+ day: 'numeric',
+ });
+
+ return `
+ Assistant is a deep-research orchestrator. Your job is to fulfill user requests with the most thorough, comprehensive research possible—no free-form replies.
+ You will be given the conversation history between the user and an AI, along with the user's latest follow-up question. Based on this, you must use the available tools to fulfill the user's request with depth and rigor.
+
+ Today's date: ${today}
+
+ You are currently on iteration ${i + 1} of your research process and have ${maxIteration} total iterations. Use every iteration wisely to gather comprehensive information.
+ When you are finished, you must call the \`done\` tool. Never output text directly.
+
+
+ Conduct the deepest, most thorough research possible. Leave no stone unturned.
+ Follow an iterative reason-act loop: call __reasoning_preamble before every tool call to outline the next step, then call the tool, then __reasoning_preamble again to reflect and decide the next step. Repeat until you have exhaustive coverage.
+ Open each __reasoning_preamble with a brief intent phrase (e.g., "Okay, the user wants to know about...", "From the results, it looks like...", "Now I need to dig into...") and describe what you'll do next. Keep it natural language, no tool names.
+ Finish with done only when you have comprehensive, multi-angle information.
+
+
+
+ Your knowledge is outdated; always use the available tools to ground answers.
+ This is DEEP RESEARCH mode—be exhaustive. Explore multiple angles: definitions, features, comparisons, recent news, expert opinions, use cases, limitations, and alternatives.
+ You can call up to 10 tools total per turn. Use an iterative loop: __reasoning_preamble → tool call(s) → __reasoning_preamble → tool call(s) → ... → __reasoning_preamble → done.
+ Never settle for surface-level answers. If results hint at more depth, reason about your next step and follow up. Cross-reference information from multiple queries.
+
+
+
+ Call done only after you have gathered comprehensive, multi-angle information. Do not call done early—exhaust your research budget first. If you reach the tool cap, call done to conclude.
+
+
+
+
+ ## Example 1: Unknown Subject - Deep Dive
+ User: "What is Kimi K2?"
+ Reason: "Okay, the user wants to know about Kimi K2. I'll start by finding out what it is and its key capabilities."
+ [calls info-gathering tool]
+ Reason: "From the results, Kimi K2 is an AI model by Moonshot. Now I need to dig into how it compares to competitors and any recent news."
+ [calls info-gathering tool]
+ Reason: "Got comparison info. Let me also check for limitations or critiques to give a balanced view."
+ [calls info-gathering tool]
+ Reason: "I now have comprehensive coverage—definition, capabilities, comparisons, and critiques. Wrapping up."
+ Action: done.
+
+ ## Example 2: Feature Research - Comprehensive
+ User: "What are the features of GPT-5.1?"
+ Reason: "The user wants comprehensive GPT-5.1 feature information. I'll start with core features and specs."
+ [calls info-gathering tool]
+ Reason: "Got the basics. Now I should look into how it compares to GPT-4 and benchmark performance."
+ [calls info-gathering tool]
+ Reason: "Good comparison data. Let me also gather use cases and expert opinions for depth."
+ [calls info-gathering tool]
+ Reason: "I have exhaustive coverage across features, comparisons, benchmarks, and reviews. Done."
+ Action: done.
+
+ ## Example 3: Iterative Refinement
+ User: "Tell me about quantum computing applications in healthcare."
+ Reason: "Okay, the user wants to know about quantum computing in healthcare. I'll start with an overview of current applications."
+ [calls info-gathering tool]
+ Reason: "Results mention drug discovery and diagnostics. Let me dive deeper into drug discovery use cases."
+ [calls info-gathering tool]
+ Reason: "Now I'll explore the diagnostics angle and any recent breakthroughs."
+ [calls info-gathering tool]
+ Reason: "Comprehensive coverage achieved. Wrapping up."
+ Action: done.
+
+
+
+
+ YOU MUST CALL __reasoning_preamble BEFORE EVERY TOOL CALL IN THIS ASSISTANT TURN. IF YOU DO NOT CALL IT, THE TOOL CALL WILL BE IGNORED.
+ ${actionDesc}
+
+
+
+ For any topic, consider searching:
+ 1. **Core definition/overview** - What is it?
+ 2. **Features/capabilities** - What can it do?
+ 3. **Comparisons** - How does it compare to alternatives?
+ 4. **Recent news/updates** - What's the latest?
+ 5. **Reviews/opinions** - What do experts say?
+ 6. **Use cases** - How is it being used?
+ 7. **Limitations/critiques** - What are the downsides?
+
+
+
+
+1. **Shallow research**: Don't stop after one or two searches—dig deeper from multiple angles
+
+2. **Over-assuming**: Don't assume things exist or don't exist - just look them up
+
+3. **Missing perspectives**: Search for both positive and critical viewpoints
+
+4. **Ignoring follow-ups**: If results hint at interesting sub-topics, explore them
+
+5. **Premature done**: Don't call done until you've exhausted reasonable research avenues
+
+6. **Skipping the reasoning step**: Always call __reasoning_preamble first to outline your research strategy
+
+
+
+
+- NEVER output normal text to the user. ONLY call tools.
+- Follow an iterative loop: __reasoning_preamble → tool call → __reasoning_preamble → tool call → ... → __reasoning_preamble → done.
+- Each __reasoning_preamble should reflect on previous results (if any) and state the next research step. No tool names in the reasoning.
+- Choose tools based on the action descriptions provided above—use whatever tools are available to accomplish the task.
+- Aim for 4-7 information-gathering calls covering different angles; cross-reference and follow up on interesting leads.
+- Call done only after comprehensive, multi-angle research is complete.
+- Do not invent tools. Do not return JSON.
+
+
+ ${
+ fileDesc.length > 0
+ ? `
+ The user has uploaded the following files which may be relevant to their request:
+ ${fileDesc}
+ You can use the uploaded files search tool to look for information within these documents if needed.
+ `
+ : ''
+ }
+ `;
+};
+
+export const getResearcherPrompt = (
+ actionDesc: string,
+ mode: 'speed' | 'balanced' | 'quality',
+ i: number,
+ maxIteration: number,
+ fileIds: string[],
+) => {
+ let prompt = '';
+
+ const filesData = UploadStore.getFileData(fileIds);
+
+ const fileDesc = filesData
+ .map(
+ (f) =>
+ `${f.fileName} ${f.initialContent} `,
+ )
+ .join('\n');
+
+ switch (mode) {
+ case 'speed':
+ prompt = getSpeedPrompt(actionDesc, i, maxIteration, fileDesc);
+ break;
+ case 'balanced':
+ prompt = getBalancedPrompt(actionDesc, i, maxIteration, fileDesc);
+ break;
+ case 'quality':
+ prompt = getQualityPrompt(actionDesc, i, maxIteration, fileDesc);
+ break;
+ default:
+ prompt = getSpeedPrompt(actionDesc, i, maxIteration, fileDesc);
+ break;
+ }
+
+ return prompt;
+};
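A hedged call-site sketch for `getResearcherPrompt`; the action description string is an assumption (it is produced elsewhere in the app):

```typescript
import { getResearcherPrompt } from '@/lib/prompts/search/researcher';

// The action description text is an assumption; it is produced elsewhere.
const systemPrompt = getResearcherPrompt(
  'web_search: search the web for up-to-date information',
  'balanced',
  1, // zero-based iteration index; rendered as "iteration 2" of 6
  6,
  [], // no uploaded files, so the files note is omitted from the prompt
);
```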
diff --git a/src/lib/prompts/webSearch.ts b/src/lib/prompts/search/writer.ts
similarity index 59%
rename from src/lib/prompts/webSearch.ts
rename to src/lib/prompts/search/writer.ts
index b99b542..02ec6de 100644
--- a/src/lib/prompts/webSearch.ts
+++ b/src/lib/prompts/search/writer.ts
@@ -1,95 +1,10 @@
-import { BaseMessageLike } from '@langchain/core/messages';
-
-export const webSearchRetrieverPrompt = `
-You are an AI question rephraser. You will be given a conversation and a follow-up question, you will have to rephrase the follow up question so it is a standalone question and can be used by another LLM to search the web for information to answer it.
-If it is a simple writing task or a greeting (unless the greeting contains a question after it) like Hi, Hello, How are you, etc. than a question then you need to return \`not_needed\` as the response (This is because the LLM won't need to search the web for finding information on this topic).
-If the user asks some question from some URL or wants you to summarize a PDF or a webpage (via URL) you need to return the links inside the \`links\` XML block and the question inside the \`question\` XML block. If the user wants to you to summarize the webpage or the PDF you need to return \`summarize\` inside the \`question\` XML block in place of a question and the link to summarize in the \`links\` XML block.
-You must always return the rephrased question inside the \`question\` XML block, if there are no links in the follow-up question then don't insert a \`links\` XML block in your response.
-
-**Note**: All user messages are individual entities and should be treated as such do not mix conversations.
-`;
-
-export const webSearchRetrieverFewShots: BaseMessageLike[] = [
- [
- 'user',
- `
-
-
-What is the capital of France
- `,
- ],
- [
- 'assistant',
- `
-Capital of france
- `,
- ],
- [
- 'user',
- `
-
-
-Hi, how are you?
- `,
- ],
- [
- 'assistant',
- `
-not_needed
- `,
- ],
- [
- 'user',
- `
-
-
-What is Docker?
- `,
- ],
- [
- 'assistant',
- `
-What is Docker
- `,
- ],
- [
- 'user',
- `
-
-
-Can you tell me what is X from https://example.com
- `,
- ],
- [
- 'assistant',
- `
-What is X?
-
-
-https://example.com
- `,
- ],
- [
- 'user',
- `
-
-
-Summarize the content from https://example.com
- `,
- ],
- [
- 'assistant',
- `
-summarize
-
-
-https://example.com
- `,
- ],
-];
-
-export const webSearchResponsePrompt = `
- You are Perplexica, an AI model skilled in web search and crafting detailed, engaging, and well-structured answers. You excel at summarizing web pages and extracting relevant information to create professional, blog-style responses.
+export const getWriterPrompt = (
+ context: string,
+ systemInstructions: string,
+ mode: 'speed' | 'balanced' | 'quality',
+) => {
+ return `
+You are Perplexica, an AI model skilled in web search and crafting detailed, engaging, and well-structured answers. You excel at summarizing web pages and extracting relevant information to create professional, blog-style responses.
Your task is to provide answers that are:
- **Informative and relevant**: Thoroughly address the user's query using the given context.
@@ -118,10 +33,11 @@ export const webSearchResponsePrompt = `
- If the query involves technical, historical, or complex topics, provide detailed background and explanatory sections to ensure clarity.
- If the user provides vague input or if relevant information is missing, explain what additional details might help refine the search.
- If no relevant information is found, say: "Hmm, sorry I could not find any relevant information on this topic. Would you like me to search again or ask something else?" Be transparent about limitations and suggest alternatives or ways to reframe the query.
-
+ ${mode === 'quality' ? "- YOU ARE CURRENTLY SET IN QUALITY MODE, GENERATE VERY DEEP, DETAILED AND COMPREHENSIVE RESPONSES USING THE FULL CONTEXT PROVIDED. ASSISTANT'S RESPONSES MUST BE AT LEAST 2000 WORDS, COVER EVERYTHING AND FRAME IT LIKE A RESEARCH REPORT." : ''}
+
### User instructions
These instructions are shared to you by the user and not by the system. You will have to follow them but give them less priority than the above instructions. If the user has provided specific instructions or preferences, incorporate them into your response while adhering to the overall guidelines.
- {systemInstructions}
+ ${systemInstructions}
### Example Output
- Begin with a brief introduction summarizing the event or query topic.
@@ -130,8 +46,9 @@ export const webSearchResponsePrompt = `
- End with a conclusion or overall perspective if relevant.
- {context}
+ ${context}
- Current date & time in ISO format (UTC timezone) is: {date}.
+ Current date & time in ISO format (UTC timezone) is: ${new Date().toISOString()}.
`;
+};
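Since the writer prompt is now a template literal rather than a LangChain placeholder template, it is filled at call time. A sketch, with an assumed context string:

```typescript
import { getWriterPrompt } from '@/lib/prompts/search/writer';

// The context string is a stand-in for the numbered source list the
// researcher assembles; real system instructions come from user settings.
const writerSystemPrompt = getWriterPrompt(
  '1. Perplexica docs - Perplexica is an AI-powered search engine...',
  '',
  'quality', // triggers the long-form, report-style instruction
);
```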
diff --git a/src/lib/prompts/suggestions/index.ts b/src/lib/prompts/suggestions/index.ts
new file mode 100644
index 0000000..18922ba
--- /dev/null
+++ b/src/lib/prompts/suggestions/index.ts
@@ -0,0 +1,17 @@
+export const suggestionGeneratorPrompt = `
+You are an AI suggestion generator for an AI-powered search engine. You will be given a conversation below. You need to generate 4-5 suggestions based on the conversation. The suggestions should be relevant to the conversation and usable by the user to ask the chat model for more information.
+Make sure the suggestions are helpful to the user; keep in mind that the user might use them to ask a chat model for more information.
+Make sure the suggestions are medium in length, informative, and relevant to the conversation.
+
+Sample suggestions for a conversation about Elon Musk:
+{
+ "suggestions": [
+ "What are Elon Musk's plans for SpaceX in the next decade?",
+ "How has Tesla's stock performance been influenced by Elon Musk's leadership?",
+ "What are the key innovations introduced by Elon Musk in the electric vehicle industry?",
+ "How does Elon Musk's vision for renewable energy impact global sustainability efforts?"
+ ]
+}
+
+Today's date is ${new Date().toISOString()}
+`;
diff --git a/src/lib/prompts/writingAssistant.ts b/src/lib/prompts/writingAssistant.ts
deleted file mode 100644
index 565827a..0000000
--- a/src/lib/prompts/writingAssistant.ts
+++ /dev/null
@@ -1,17 +0,0 @@
-export const writingAssistantPrompt = `
-You are Perplexica, an AI model who is expert at searching the web and answering user's queries. You are currently set on focus mode 'Writing Assistant', this means you will be helping the user write a response to a given query.
-Since you are a writing assistant, you would not perform web searches. If you think you lack information to answer the query, you can ask the user for more information or suggest them to switch to a different focus mode.
-You will be shared a context that can contain information from files user has uploaded to get answers from. You will have to generate answers upon that.
-
-You have to cite the answer using [number] notation. You must cite the sentences with their relevent context number. You must cite each and every part of the answer so the user can know where the information is coming from.
-Place these citations at the end of that particular sentence. You can cite the same sentence multiple times if it is relevant to the user's query like [number1][number2].
-However you do not need to cite it using the same number. You can use different numbers to cite the same sentence multiple times. The number refers to the number of the search result (passed in the context) used to generate that part of the answer.
-
-### User instructions
-These instructions are shared to you by the user and not by the system. You will have to follow them but give them less priority than the above instructions. If the user has provided specific instructions or preferences, incorporate them into your response while adhering to the overall guidelines.
-{systemInstructions}
-
-
-{context}
-
-`;
diff --git a/src/lib/search/index.ts b/src/lib/search/index.ts
deleted file mode 100644
index 8eb8ab0..0000000
--- a/src/lib/search/index.ts
+++ /dev/null
@@ -1,59 +0,0 @@
-import MetaSearchAgent from '@/lib/search/metaSearchAgent';
-import prompts from '../prompts';
-
-export const searchHandlers: Record<string, MetaSearchAgentType> = {
- webSearch: new MetaSearchAgent({
- activeEngines: [],
- queryGeneratorPrompt: prompts.webSearchRetrieverPrompt,
- responsePrompt: prompts.webSearchResponsePrompt,
- queryGeneratorFewShots: prompts.webSearchRetrieverFewShots,
- rerank: true,
- rerankThreshold: 0.3,
- searchWeb: true,
- }),
- academicSearch: new MetaSearchAgent({
- activeEngines: ['arxiv', 'google scholar', 'pubmed'],
- queryGeneratorPrompt: prompts.webSearchRetrieverPrompt,
- responsePrompt: prompts.webSearchResponsePrompt,
- queryGeneratorFewShots: prompts.webSearchRetrieverFewShots,
- rerank: true,
- rerankThreshold: 0,
- searchWeb: true,
- }),
- writingAssistant: new MetaSearchAgent({
- activeEngines: [],
- queryGeneratorPrompt: '',
- queryGeneratorFewShots: [],
- responsePrompt: prompts.writingAssistantPrompt,
- rerank: true,
- rerankThreshold: 0,
- searchWeb: false,
- }),
- wolframAlphaSearch: new MetaSearchAgent({
- activeEngines: ['wolframalpha'],
- queryGeneratorPrompt: prompts.webSearchRetrieverPrompt,
- responsePrompt: prompts.webSearchResponsePrompt,
- queryGeneratorFewShots: prompts.webSearchRetrieverFewShots,
- rerank: false,
- rerankThreshold: 0,
- searchWeb: true,
- }),
- youtubeSearch: new MetaSearchAgent({
- activeEngines: ['youtube'],
- queryGeneratorPrompt: prompts.webSearchRetrieverPrompt,
- responsePrompt: prompts.webSearchResponsePrompt,
- queryGeneratorFewShots: prompts.webSearchRetrieverFewShots,
- rerank: true,
- rerankThreshold: 0.3,
- searchWeb: true,
- }),
- redditSearch: new MetaSearchAgent({
- activeEngines: ['reddit'],
- queryGeneratorPrompt: prompts.webSearchRetrieverPrompt,
- responsePrompt: prompts.webSearchResponsePrompt,
- queryGeneratorFewShots: prompts.webSearchRetrieverFewShots,
- rerank: true,
- rerankThreshold: 0.3,
- searchWeb: true,
- }),
-};
diff --git a/src/lib/search/metaSearchAgent.ts b/src/lib/search/metaSearchAgent.ts
deleted file mode 100644
index 1f72f79..0000000
--- a/src/lib/search/metaSearchAgent.ts
+++ /dev/null
@@ -1,514 +0,0 @@
-import { ChatOpenAI } from '@langchain/openai';
-import type { BaseChatModel } from '@langchain/core/language_models/chat_models';
-import type { Embeddings } from '@langchain/core/embeddings';
-import {
- ChatPromptTemplate,
- MessagesPlaceholder,
- PromptTemplate,
-} from '@langchain/core/prompts';
-import {
- RunnableLambda,
- RunnableMap,
- RunnableSequence,
-} from '@langchain/core/runnables';
-import { BaseMessage, BaseMessageLike } from '@langchain/core/messages';
-import { StringOutputParser } from '@langchain/core/output_parsers';
-import LineListOutputParser from '../outputParsers/listLineOutputParser';
-import LineOutputParser from '../outputParsers/lineOutputParser';
-import { getDocumentsFromLinks } from '../utils/documents';
-import { Document } from '@langchain/core/documents';
-import { searchSearxng } from '../searxng';
-import path from 'node:path';
-import fs from 'node:fs';
-import computeSimilarity from '../utils/computeSimilarity';
-import formatChatHistoryAsString from '../utils/formatHistory';
-import eventEmitter from 'events';
-import { StreamEvent } from '@langchain/core/tracers/log_stream';
-
-export interface MetaSearchAgentType {
- searchAndAnswer: (
- message: string,
- history: BaseMessage[],
- llm: BaseChatModel,
- embeddings: Embeddings,
- optimizationMode: 'speed' | 'balanced' | 'quality',
- fileIds: string[],
- systemInstructions: string,
- ) => Promise<eventEmitter>;
-}
-
-interface Config {
- searchWeb: boolean;
- rerank: boolean;
- rerankThreshold: number;
- queryGeneratorPrompt: string;
- queryGeneratorFewShots: BaseMessageLike[];
- responsePrompt: string;
- activeEngines: string[];
-}
-
-type BasicChainInput = {
- chat_history: BaseMessage[];
- query: string;
-};
-
-class MetaSearchAgent implements MetaSearchAgentType {
- private config: Config;
- private strParser = new StringOutputParser();
-
- constructor(config: Config) {
- this.config = config;
- }
-
- private async createSearchRetrieverChain(llm: BaseChatModel) {
- (llm as unknown as ChatOpenAI).temperature = 0;
-
- return RunnableSequence.from([
- ChatPromptTemplate.fromMessages([
- ['system', this.config.queryGeneratorPrompt],
- ...this.config.queryGeneratorFewShots,
- [
- 'user',
- `
-
- {chat_history}
-
-
-
- {query}
-
- `,
- ],
- ]),
- llm,
- this.strParser,
- RunnableLambda.from(async (input: string) => {
- const linksOutputParser = new LineListOutputParser({
- key: 'links',
- });
-
- const questionOutputParser = new LineOutputParser({
- key: 'question',
- });
-
- const links = await linksOutputParser.parse(input);
- let question = (await questionOutputParser.parse(input)) ?? input;
-
- if (question === 'not_needed') {
- return { query: '', docs: [] };
- }
-
- if (links.length > 0) {
- if (question.length === 0) {
- question = 'summarize';
- }
-
- let docs: Document[] = [];
-
- const linkDocs = await getDocumentsFromLinks({ links });
-
- const docGroups: Document[] = [];
-
- linkDocs.map((doc) => {
- const URLDocExists = docGroups.find(
- (d) =>
- d.metadata.url === doc.metadata.url &&
- d.metadata.totalDocs < 10,
- );
-
- if (!URLDocExists) {
- docGroups.push({
- ...doc,
- metadata: {
- ...doc.metadata,
- totalDocs: 1,
- },
- });
- }
-
- const docIndex = docGroups.findIndex(
- (d) =>
- d.metadata.url === doc.metadata.url &&
- d.metadata.totalDocs < 10,
- );
-
- if (docIndex !== -1) {
- docGroups[docIndex].pageContent =
- docGroups[docIndex].pageContent + `\n\n` + doc.pageContent;
- docGroups[docIndex].metadata.totalDocs += 1;
- }
- });
-
- await Promise.all(
- docGroups.map(async (doc) => {
- const res = await llm.invoke(`
- You are a web search summarizer, tasked with summarizing a piece of text retrieved from a web search. Your job is to summarize the
- text into a detailed, 2-4 paragraph explanation that captures the main ideas and provides a comprehensive answer to the query.
- If the query is \"summarize\", you should provide a detailed summary of the text. If the query is a specific question, you should answer it in the summary.
-
- - **Journalistic tone**: The summary should sound professional and journalistic, not too casual or vague.
- - **Thorough and detailed**: Ensure that every key point from the text is captured and that the summary directly answers the query.
- - **Not too lengthy, but detailed**: The summary should be informative but not excessively long. Focus on providing detailed information in a concise format.
-
- The text will be shared inside the \`text\` XML tag, and the query inside the \`query\` XML tag.
-
-
- 1. \`
- Docker is a set of platform-as-a-service products that use OS-level virtualization to deliver software in packages called containers.
- It was first released in 2013 and is developed by Docker, Inc. Docker is designed to make it easier to create, deploy, and run applications
- by using containers.
-
-
-
- What is Docker and how does it work?
-
-
- Response:
- Docker is a revolutionary platform-as-a-service product developed by Docker, Inc., that uses container technology to make application
- deployment more efficient. It allows developers to package their software with all necessary dependencies, making it easier to run in
- any environment. Released in 2013, Docker has transformed the way applications are built, deployed, and managed.
- \`
- 2. \`
- The theory of relativity, or simply relativity, encompasses two interrelated theories of Albert Einstein: special relativity and general
- relativity. However, the word "relativity" is sometimes used in reference to Galilean invariance. The term "theory of relativity" was based
- on the expression "relative theory" used by Max Planck in 1906. The theory of relativity usually encompasses two interrelated theories by
- Albert Einstein: special relativity and general relativity. Special relativity applies to all physical phenomena in the absence of gravity.
- General relativity explains the law of gravitation and its relation to other forces of nature. It applies to the cosmological and astrophysical
- realm, including astronomy.
-
-
-
- summarize
-
-
- Response:
- The theory of relativity, developed by Albert Einstein, encompasses two main theories: special relativity and general relativity. Special
- relativity applies to all physical phenomena in the absence of gravity, while general relativity explains the law of gravitation and its
- relation to other forces of nature. The theory of relativity is based on the concept of "relative theory," as introduced by Max Planck in
- 1906. It is a fundamental theory in physics that has revolutionized our understanding of the universe.
- \`
-
-
- Everything below is the actual data you will be working with. Good luck!
-
-
- ${question}
-
-
-
- ${doc.pageContent}
-
-
- Make sure to answer the query in the summary.
- `);
-
- const document = new Document({
- pageContent: res.content as string,
- metadata: {
- title: doc.metadata.title,
- url: doc.metadata.url,
- },
- });
-
- docs.push(document);
- }),
- );
-
- return { query: question, docs: docs };
- } else {
- question = question.replace(/.*?<\/think>/g, '');
-
- const res = await searchSearxng(question, {
- language: 'en',
- engines: this.config.activeEngines,
- });
-
- const documents = res.results.map(
- (result) =>
- new Document({
- pageContent:
- result.content ||
- (this.config.activeEngines.includes('youtube')
- ? result.title
- : '') /* Todo: Implement transcript grabbing using Youtubei (source: https://www.npmjs.com/package/youtubei) */,
- metadata: {
- title: result.title,
- url: result.url,
- ...(result.img_src && { img_src: result.img_src }),
- },
- }),
- );
-
- return { query: question, docs: documents };
- }
- }),
- ]);
- }
-
- private async createAnsweringChain(
- llm: BaseChatModel,
- fileIds: string[],
- embeddings: Embeddings,
- optimizationMode: 'speed' | 'balanced' | 'quality',
- systemInstructions: string,
- ) {
- return RunnableSequence.from([
- RunnableMap.from({
- systemInstructions: () => systemInstructions,
- query: (input: BasicChainInput) => input.query,
- chat_history: (input: BasicChainInput) => input.chat_history,
- date: () => new Date().toISOString(),
- context: RunnableLambda.from(async (input: BasicChainInput) => {
- const processedHistory = formatChatHistoryAsString(
- input.chat_history,
- );
-
- let docs: Document[] | null = null;
- let query = input.query;
-
- if (this.config.searchWeb) {
- const searchRetrieverChain =
- await this.createSearchRetrieverChain(llm);
-
- const searchRetrieverResult = await searchRetrieverChain.invoke({
- chat_history: processedHistory,
- query,
- });
-
- query = searchRetrieverResult.query;
- docs = searchRetrieverResult.docs;
- }
-
- const sortedDocs = await this.rerankDocs(
- query,
- docs ?? [],
- fileIds,
- embeddings,
- optimizationMode,
- );
-
- return sortedDocs;
- })
- .withConfig({
- runName: 'FinalSourceRetriever',
- })
- .pipe(this.processDocs),
- }),
- ChatPromptTemplate.fromMessages([
- ['system', this.config.responsePrompt],
- new MessagesPlaceholder('chat_history'),
- ['user', '{query}'],
- ]),
- llm,
- this.strParser,
- ]).withConfig({
- runName: 'FinalResponseGenerator',
- });
- }
-
- private async rerankDocs(
- query: string,
- docs: Document[],
- fileIds: string[],
- embeddings: Embeddings,
- optimizationMode: 'speed' | 'balanced' | 'quality',
- ) {
- if (docs.length === 0 && fileIds.length === 0) {
- return docs;
- }
-
- const filesData = fileIds
- .map((file) => {
- const filePath = path.join(process.cwd(), 'uploads', file);
-
- const contentPath = filePath + '-extracted.json';
- const embeddingsPath = filePath + '-embeddings.json';
-
- const content = JSON.parse(fs.readFileSync(contentPath, 'utf8'));
- const embeddings = JSON.parse(fs.readFileSync(embeddingsPath, 'utf8'));
-
- const fileSimilaritySearchObject = content.contents.map(
- (c: string, i: number) => {
- return {
- fileName: content.title,
- content: c,
- embeddings: embeddings.embeddings[i],
- };
- },
- );
-
- return fileSimilaritySearchObject;
- })
- .flat();
-
- if (query.toLocaleLowerCase() === 'summarize') {
- return docs.slice(0, 15);
- }
-
- const docsWithContent = docs.filter(
- (doc) => doc.pageContent && doc.pageContent.length > 0,
- );
-
- if (optimizationMode === 'speed' || this.config.rerank === false) {
- if (filesData.length > 0) {
- const [queryEmbedding] = await Promise.all([
- embeddings.embedQuery(query),
- ]);
-
- const fileDocs = filesData.map((fileData) => {
- return new Document({
- pageContent: fileData.content,
- metadata: {
- title: fileData.fileName,
- url: `File`,
- },
- });
- });
-
- const similarity = filesData.map((fileData, i) => {
- const sim = computeSimilarity(queryEmbedding, fileData.embeddings);
-
- return {
- index: i,
- similarity: sim,
- };
- });
-
- let sortedDocs = similarity
- .filter(
- (sim) => sim.similarity > (this.config.rerankThreshold ?? 0.3),
- )
- .sort((a, b) => b.similarity - a.similarity)
- .slice(0, 15)
- .map((sim) => fileDocs[sim.index]);
-
- sortedDocs =
- docsWithContent.length > 0 ? sortedDocs.slice(0, 8) : sortedDocs;
-
- return [
- ...sortedDocs,
- ...docsWithContent.slice(0, 15 - sortedDocs.length),
- ];
- } else {
- return docsWithContent.slice(0, 15);
- }
- } else if (optimizationMode === 'balanced') {
- const [docEmbeddings, queryEmbedding] = await Promise.all([
- embeddings.embedDocuments(
- docsWithContent.map((doc) => doc.pageContent),
- ),
- embeddings.embedQuery(query),
- ]);
-
- docsWithContent.push(
- ...filesData.map((fileData) => {
- return new Document({
- pageContent: fileData.content,
- metadata: {
- title: fileData.fileName,
- url: `File`,
- },
- });
- }),
- );
-
- docEmbeddings.push(...filesData.map((fileData) => fileData.embeddings));
-
- const similarity = docEmbeddings.map((docEmbedding, i) => {
- const sim = computeSimilarity(queryEmbedding, docEmbedding);
-
- return {
- index: i,
- similarity: sim,
- };
- });
-
- const sortedDocs = similarity
- .filter((sim) => sim.similarity > (this.config.rerankThreshold ?? 0.3))
- .sort((a, b) => b.similarity - a.similarity)
- .slice(0, 15)
- .map((sim) => docsWithContent[sim.index]);
-
- return sortedDocs;
- }
-
- return [];
- }
-
- private processDocs(docs: Document[]) {
- return docs
- .map(
- (_, index) =>
- `${index + 1}. ${docs[index].metadata.title} ${docs[index].pageContent}`,
- )
- .join('\n');
- }
-
- private async handleStream(
- stream: AsyncGenerator<StreamEvent, any, any>,
- emitter: eventEmitter,
- ) {
- for await (const event of stream) {
- if (
- event.event === 'on_chain_end' &&
- event.name === 'FinalSourceRetriever'
- ) {
- emitter.emit(
- 'data',
- JSON.stringify({ type: 'sources', data: event.data.output }),
- );
- }
- if (
- event.event === 'on_chain_stream' &&
- event.name === 'FinalResponseGenerator'
- ) {
- emitter.emit(
- 'data',
- JSON.stringify({ type: 'response', data: event.data.chunk }),
- );
- }
- if (
- event.event === 'on_chain_end' &&
- event.name === 'FinalResponseGenerator'
- ) {
- emitter.emit('end');
- }
- }
- }
-
- async searchAndAnswer(
- message: string,
- history: BaseMessage[],
- llm: BaseChatModel,
- embeddings: Embeddings,
- optimizationMode: 'speed' | 'balanced' | 'quality',
- fileIds: string[],
- systemInstructions: string,
- ) {
- const emitter = new eventEmitter();
-
- const answeringChain = await this.createAnsweringChain(
- llm,
- fileIds,
- embeddings,
- optimizationMode,
- systemInstructions,
- );
-
- const stream = answeringChain.streamEvents(
- {
- chat_history: history,
- query: message,
- },
- {
- version: 'v1',
- },
- );
-
- this.handleStream(stream, emitter);
-
- return emitter;
- }
-}
-
-export default MetaSearchAgent;
diff --git a/src/lib/session.ts b/src/lib/session.ts
new file mode 100644
index 0000000..4f74330
--- /dev/null
+++ b/src/lib/session.ts
@@ -0,0 +1,105 @@
+import { EventEmitter } from 'events';
+import { applyPatch } from 'rfc6902';
+import { Block } from './types';
+
+const sessions: Map<string, SessionManager> =
+ (global as any)._sessionManagerSessions || new Map<string, SessionManager>();
+if (process.env.NODE_ENV !== 'production') {
+ (global as any)._sessionManagerSessions = sessions;
+}
+
+class SessionManager {
+ private static sessions: Map<string, SessionManager> = sessions;
+ readonly id: string;
+ private blocks = new Map<string, Block>();
+ private events: { event: string; data: any }[] = [];
+ private emitter = new EventEmitter();
+ private TTL_MS = 30 * 60 * 1000;
+
+ constructor(id?: string) {
+ this.id = id ?? crypto.randomUUID();
+
+ setTimeout(() => {
+ SessionManager.sessions.delete(this.id);
+ }, this.TTL_MS);
+ }
+
+ static getSession(id: string): SessionManager | undefined {
+ return this.sessions.get(id);
+ }
+
+ static getAllSessions(): SessionManager[] {
+ return Array.from(this.sessions.values());
+ }
+
+ static createSession(): SessionManager {
+ const session = new SessionManager();
+ this.sessions.set(session.id, session);
+ return session;
+ }
+
+ removeAllListeners() {
+ this.emitter.removeAllListeners();
+ }
+
+ emit(event: string, data: any) {
+ this.emitter.emit(event, data);
+ this.events.push({ event, data });
+ }
+
+ emitBlock(block: Block) {
+ this.blocks.set(block.id, block);
+ this.emit('data', {
+ type: 'block',
+ block: block,
+ });
+ }
+
+ getBlock(blockId: string): Block | undefined {
+ return this.blocks.get(blockId);
+ }
+
+ updateBlock(blockId: string, patch: any[]) {
+ const block = this.blocks.get(blockId);
+
+ if (block) {
+ applyPatch(block, patch);
+ this.blocks.set(blockId, block);
+ this.emit('data', {
+ type: 'updateBlock',
+ blockId: blockId,
+ patch: patch,
+ });
+ }
+ }
+
+ getAllBlocks() {
+ return Array.from(this.blocks.values());
+ }
+
+ subscribe(listener: (event: string, data: any) => void): () => void {
+ const currentEventsLength = this.events.length;
+
+ const handler = (event: string) => (data: any) => listener(event, data);
+ const dataHandler = handler('data');
+ const endHandler = handler('end');
+ const errorHandler = handler('error');
+
+ this.emitter.on('data', dataHandler);
+ this.emitter.on('end', endHandler);
+ this.emitter.on('error', errorHandler);
+
+ for (let i = 0; i < currentEventsLength; i++) {
+ const { event, data } = this.events[i];
+ listener(event, data);
+ }
+
+ return () => {
+ this.emitter.off('data', dataHandler);
+ this.emitter.off('end', endHandler);
+ this.emitter.off('error', errorHandler);
+ };
+ }
+}
+
+export default SessionManager;
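A usage sketch for the session manager, showing the block-then-patch flow over RFC 6902 JSON Patch; the block shape is an assumption, since the `Block` type is defined elsewhere in `src/lib/types.ts`:

```typescript
import SessionManager from '@/lib/session';
import type { Block } from '@/lib/types';

const session = SessionManager.createSession();

const unsubscribe = session.subscribe((event, data) => {
  // New subscribers first receive a replay of buffered events.
  console.log(event, JSON.stringify(data));
});

// The Block shape here is assumed; the real type lives in src/lib/types.ts.
const block = { id: 'b1', content: '' } as unknown as Block;
session.emitBlock(block);

// Subscribers receive only the RFC 6902 patch, not the whole block again.
session.updateBlock('b1', [
  { op: 'replace', path: '/content', value: 'Hello from Perplexica' },
]);

unsubscribe();
```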
diff --git a/src/lib/types.ts b/src/lib/types.ts
new file mode 100644
index 0000000..b78a07b
--- /dev/null
+++ b/src/lib/types.ts
@@ -0,0 +1,123 @@
+import { ToolCall } from './models/types';
+
+export type SystemMessage = {
+ role: 'system';
+ content: string;
+};
+
+export type AssistantMessage = {
+ role: 'assistant';
+ content: string;
+ tool_calls?: ToolCall[];
+};
+
+export type UserMessage = {
+ role: 'user';
+ content: string;
+};
+
+export type ToolMessage = {
+ role: 'tool';
+ id: string;
+ name: string;
+ content: string;
+};
+
+export type ChatTurnMessage = UserMessage | AssistantMessage;
+
+export type Message =
+ | UserMessage
+ | AssistantMessage
+ | SystemMessage
+ | ToolMessage;
+
+export type Chunk = {
+ content: string;
+ metadata: Record<string, any>;