mirror of https://github.com/ItzCrazyKns/Perplexica.git
synced 2026-01-11 22:45:42 +00:00

Compare commits: edba47aed8...master (38 commits)
Commits in this range:

d7b020e5bb, d95ff9ccdd, 8347b798f3, a16472bcf3, 3b8d8be676, b83f9bac78, bd7c563137, 23b903db9a, a98f0df83f, 164d528761, af4ec17117, 1622e0893a, 55a4b9d436, b450d0e668, 0987ee4370, d1bd22786d, bb7b7170ca, be7bd62a74, a691f3bab0, f1c9fa0e33, d872cf5009, fdef718980, 19dde42f22, c9f6893d99, 53e9859b6c, 53e39cd985, 7f3f881964, 9620e63e3f, ec5ff6f4a8, 0ace778b03, 6919ad1a0f, b5ba8c48c0, 65fdecb122, 5a44319d85, cc183cd0cd, 50ca7ac73a, a31a4ab295, fdaa2f0646
@@ -11,33 +11,63 @@ Perplexica's codebase is organized as follows:

 - **UI Components and Pages**:
   - **Components (`src/components`)**: Reusable UI components.
   - **Pages and Routes (`src/app`)**: Next.js app directory structure with page components.
-    - Main app routes include: home (`/`), chat (`/c`), discover (`/discover`), library (`/library`), and settings (`/settings`).
+    - Main app routes include: home (`/`), chat (`/c`), discover (`/discover`), and library (`/library`).
-- **API Routes (`src/app/api`)**: API endpoints implemented with Next.js API routes.
+- **API Routes (`src/app/api`)**: Server endpoints implemented with Next.js route handlers.
-  - `/api/chat`: Handles chat interactions.
-  - `/api/search`: Provides direct access to Perplexica's search capabilities.
-  - Other endpoints for models, files, and suggestions.
 - **Backend Logic (`src/lib`)**: Contains all the backend functionality including search, database, and API logic.
-  - The search functionality is present inside `src/lib/search` directory.
+  - The search system lives in `src/lib/agents/search`.
-  - All of the focus modes are implemented using the Meta Search Agent class in `src/lib/search/metaSearchAgent.ts`.
+  - The search pipeline is split into classification, research, widgets, and writing.
   - Database functionality is in `src/lib/db`.
-  - Chat model and embedding model providers are managed in `src/lib/providers`.
+  - Chat model and embedding model providers are in `src/lib/models/providers`, and models are loaded via `src/lib/models/registry.ts`.
-  - Prompt templates and LLM chain definitions are in `src/lib/prompts` and `src/lib/chains` respectively.
+  - Prompt templates are in `src/lib/prompts`.
+  - SearXNG integration is in `src/lib/searxng.ts`.
+  - Upload search lives in `src/lib/uploads`.
+
+### Where to make changes
+
+If you are not sure where to start, use this section as a map.
+
+- **Search behavior and reasoning**
+  - `src/lib/agents/search` contains the core chat and search pipeline.
+  - `classifier.ts` decides whether research is needed and what should run.
+  - `researcher/` gathers information in the background.
+
+- **Add or change a search capability**
+  - Research tools (web, academic, discussions, uploads, scraping) live in `src/lib/agents/search/researcher/actions`.
+  - Tools are registered in `src/lib/agents/search/researcher/actions/index.ts`.
+
+- **Add or change widgets**
+  - Widgets live in `src/lib/agents/search/widgets`.
+  - Widgets run in parallel with research and show structured results in the UI.
+
+- **Model integrations**
+  - Providers live in `src/lib/models/providers`.
+  - Add new providers there and wire them into the model registry so they show up in the app.
+
+- **Architecture docs**
+  - High level overview: `docs/architecture/README.md`
+  - High level flow: `docs/architecture/WORKING.md`

 ## API Documentation

-Perplexica exposes several API endpoints for programmatic access, including:
+Perplexica includes API documentation for programmatic access.

-- **Search API**: Access Perplexica's advanced search capabilities directly via the `/api/search` endpoint. For detailed documentation, see `docs/api/search.md`.
+- **Search API**: For detailed documentation, see `docs/API/SEARCH.md`.

 ## Setting Up Your Environment

 Before diving into coding, setting up your local environment is key. Here's what you need to do:

-1. In the root directory, locate the `sample.config.toml` file.
+1. Run `npm install` to install all dependencies.
-2. Rename it to `config.toml` and fill in the necessary configuration fields.
+2. Use `npm run dev` to start the application in development mode.
-3. Run `npm install` to install all dependencies.
+3. Open http://localhost:3000 and complete the setup in the UI (API keys, models, search backend URL, etc.).
-4. Run `npm run db:migrate` to set up the local sqlite database.
-5. Use `npm run dev` to start the application in development mode.
+Database migrations are applied automatically on startup.
+
+For full installation options (Docker and non Docker), see the installation guide in the repository README.

 **Please note**: Docker configurations are present for setting up production environments, whereas `npm run dev` is used for development purposes.
README.md (16 changed lines)

@@ -18,9 +18,11 @@ Want to know more about its architecture and how it works? You can read it [here

 🤖 **Support for all major AI providers** - Use local LLMs through Ollama or connect to OpenAI, Anthropic Claude, Google Gemini, Groq, and more. Mix and match models based on your needs.

-⚡ **Smart search modes** - Choose Balanced Mode for everyday searches, Fast Mode when you need quick answers, or wait for Quality Mode (coming soon) for deep research.
+⚡ **Smart search modes** - Choose Speed Mode when you need quick answers, Balanced Mode for everyday searches, or Quality Mode for deep research.

-🎯 **Six specialized focus modes** - Get better results with modes designed for specific tasks: Academic papers, YouTube videos, Reddit discussions, Wolfram Alpha calculations, writing assistance, or general web search.
+🧭 **Pick your sources** - Search the web, discussions, or academic papers. More sources and integrations are in progress.
+
+🧩 **Widgets** - Helpful UI cards that show up when relevant, like weather, calculations, stock prices, and other quick lookups.

 🔍 **Web search powered by SearxNG** - Access multiple search engines while keeping your identity private. Support for Tavily and Exa coming soon for even better results.

@@ -237,13 +239,9 @@ Perplexica runs on Next.js and handles all API requests. It works right away on

 ## Upcoming Features

-- [x] Add settings page
+- [ ] Adding more widgets, integrations, search sources
-- [x] Adding support for local LLMs
+- [ ] Adding ability to create custom agents (name T.B.D.)
-- [x] History Saving features
+- [ ] Adding authentication
-- [x] Introducing various Focus Modes
-- [x] Adding API support
-- [x] Adding Discover
-- [ ] Finalizing Copilot Mode

 ## Support Us
@@ -7,11 +7,8 @@ services:
       - '3000:3000'
     volumes:
       - data:/home/perplexica/data
-      - uploads:/home/perplexica/uploads
    restart: unless-stopped

 volumes:
   data:
     name: 'perplexica-data'
-  uploads:
-    name: 'perplexica-uploads'
@@ -57,7 +57,7 @@ Use the `id` field as the `providerId` and the `key` field from the models array

 ### Request

-The API accepts a JSON object in the request body, where you define the focus mode, chat models, embedding models, and your query.
+The API accepts a JSON object in the request body, where you define the enabled search `sources`, chat models, embedding models, and your query.

 #### Request Body Structure

@@ -72,7 +72,7 @@ The API accepts a JSON object in the request body, where you define the focus mo
     "key": "text-embedding-3-large"
   },
   "optimizationMode": "speed",
-  "focusMode": "webSearch",
+  "sources": ["web"],
   "query": "What is Perplexica",
   "history": [
     ["human", "Hi, how are you?"],

@@ -87,24 +87,25 @@ The API accepts a JSON object in the request body, where you define the focus mo

 ### Request Parameters

-- **`chatModel`** (object, optional): Defines the chat model to be used for the query. To get available providers and models, send a GET request to `http://localhost:3000/api/providers`.
+- **`chatModel`** (object, required): Defines the chat model to be used for the query. To get available providers and models, send a GET request to `http://localhost:3000/api/providers`.

   - `providerId` (string): The UUID of the provider. You can get this from the `/api/providers` endpoint response.
   - `key` (string): The model key/identifier (e.g., `gpt-4o-mini`, `llama3.1:latest`). Use the `key` value from the provider's `chatModels` array, not the display name.

-- **`embeddingModel`** (object, optional): Defines the embedding model for similarity-based searching. To get available providers and models, send a GET request to `http://localhost:3000/api/providers`.
+- **`embeddingModel`** (object, required): Defines the embedding model for similarity-based searching. To get available providers and models, send a GET request to `http://localhost:3000/api/providers`.

   - `providerId` (string): The UUID of the embedding provider. You can get this from the `/api/providers` endpoint response.
   - `key` (string): The embedding model key (e.g., `text-embedding-3-large`, `nomic-embed-text`). Use the `key` value from the provider's `embeddingModels` array, not the display name.

-- **`focusMode`** (string, required): Specifies which focus mode to use. Available modes:
+- **`sources`** (array, required): Which search sources to enable. Available values:

-  - `webSearch`, `academicSearch`, `writingAssistant`, `wolframAlphaSearch`, `youtubeSearch`, `redditSearch`.
+  - `web`, `academic`, `discussions`.

 - **`optimizationMode`** (string, optional): Specifies the optimization mode to control the balance between performance and quality. Available modes:

   - `speed`: Prioritize speed and return the fastest answer.
   - `balanced`: Provide a balanced answer with good speed and reasonable quality.
+  - `quality`: Prioritize answer quality (may be slower).

 - **`query`** (string, required): The search query or question.
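Putting the parameters above together, a complete request body for `POST /api/search` can be sketched as follows. This is only an illustration: the `providerId` values are placeholder UUIDs that a real client would first fetch from `GET /api/providers`.

```typescript
// Sketch of a /api/search request body. The providerId values are
// placeholders; fetch real ones from GET /api/providers first.
const body = {
  chatModel: {
    providerId: '00000000-0000-0000-0000-000000000000', // placeholder UUID
    key: 'gpt-4o-mini',
  },
  embeddingModel: {
    providerId: '00000000-0000-0000-0000-000000000000', // placeholder UUID
    key: 'text-embedding-3-large',
  },
  optimizationMode: 'speed', // 'speed' | 'balanced' | 'quality'
  sources: ['web'], // any of 'web', 'academic', 'discussions'
  query: 'What is Perplexica',
  history: [['human', 'Hi, how are you?']],
  stream: false,
};

// Sending the request would then look like:
// const res = await fetch('http://localhost:3000/api/search', {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: JSON.stringify(body),
// });
```

With `stream: false` the endpoint returns a single JSON object; the streaming variant is covered below in the response section.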
@@ -132,14 +133,14 @@ The response from the API includes both the final message and the sources used t
   "message": "Perplexica is an innovative, open-source AI-powered search engine designed to enhance the way users search for information online. Here are some key features and characteristics of Perplexica:\n\n- **AI-Powered Technology**: It utilizes advanced machine learning algorithms to not only retrieve information but also to understand the context and intent behind user queries, providing more relevant results [1][5].\n\n- **Open-Source**: Being open-source, Perplexica offers flexibility and transparency, allowing users to explore its functionalities without the constraints of proprietary software [3][10].",
   "sources": [
     {
-      "pageContent": "Perplexica is an innovative, open-source AI-powered search engine designed to enhance the way users search for information online.",
+      "content": "Perplexica is an innovative, open-source AI-powered search engine designed to enhance the way users search for information online.",
       "metadata": {
         "title": "What is Perplexica, and how does it function as an AI-powered search ...",
         "url": "https://askai.glarity.app/search/What-is-Perplexica--and-how-does-it-function-as-an-AI-powered-search-engine"
       }
     },
     {
-      "pageContent": "Perplexica is an open-source AI-powered search tool that dives deep into the internet to find precise answers.",
+      "content": "Perplexica is an open-source AI-powered search tool that dives deep into the internet to find precise answers.",
       "metadata": {
         "title": "Sahar Mor's Post",
         "url": "https://www.linkedin.com/posts/sahar-mor_a-new-open-source-project-called-perplexica-activity-7204489745668694016-ncja"

@@ -158,7 +159,7 @@ Example of streamed response objects:

 ```
 {"type":"init","data":"Stream connected"}
-{"type":"sources","data":[{"pageContent":"...","metadata":{"title":"...","url":"..."}},...]}
+{"type":"sources","data":[{"content":"...","metadata":{"title":"...","url":"..."}},...]}
 {"type":"response","data":"Perplexica is an "}
 {"type":"response","data":"innovative, open-source "}
 {"type":"response","data":"AI-powered search engine..."}
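Since each streamed line is a standalone JSON object, a client can split the stream on newlines and parse line by line. A minimal sketch of that parsing, using illustrative sample lines shaped like the streamed objects above:

```typescript
// Parse newline-delimited JSON events from the /api/search stream.
// Each non-empty line is an independent JSON object with a `type` field.
type StreamEvent =
  | { type: 'init'; data: string }
  | { type: 'sources'; data: unknown[] }
  | { type: 'response'; data: string };

function parseStreamChunk(chunk: string): StreamEvent[] {
  return chunk
    .split('\n')
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line) as StreamEvent);
}

// Illustrative sample input, shaped like the streamed objects above.
const sample =
  '{"type":"init","data":"Stream connected"}\n' +
  '{"type":"response","data":"Perplexica is an "}\n' +
  '{"type":"response","data":"open-source search engine."}';

const events = parseStreamChunk(sample);

// Concatenate the `response` chunks to rebuild the answer text.
const answer = events
  .filter((e): e is { type: 'response'; data: string } => e.type === 'response')
  .map((e) => e.data)
  .join('');
```

A real client would accumulate chunks from `res.body` and only parse complete lines, buffering any partial trailing line until the next chunk arrives.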
@@ -174,9 +175,9 @@ Clients should process each line as a separate JSON object. The different messag

 ### Fields in the Response

-- **`message`** (string): The search result, generated based on the query and focus mode.
+- **`message`** (string): The search result, generated based on the query and enabled `sources`.
 - **`sources`** (array): A list of sources that were used to generate the search result. Each source includes:
-  - `pageContent`: A snippet of the relevant content from the source.
+  - `content`: A snippet of the relevant content from the source.
   - `metadata`: Metadata about the source, including:
     - `title`: The title of the webpage.
     - `url`: The URL of the webpage.

@@ -185,5 +186,5 @@ Clients should process each line as a separate JSON object. The different messag

 If an error occurs during the search process, the API will return an appropriate error message with an HTTP status code.

-- **400**: If the request is malformed or missing required fields (e.g., no focus mode or query).
+- **400**: If the request is malformed or missing required fields (e.g., no `sources` or `query`).
 - **500**: If an internal server error occurs during the search.
@@ -1,11 +1,38 @@
-# Perplexica's Architecture
+# Perplexica Architecture

-Perplexica's architecture consists of the following key components:
+Perplexica is a Next.js application that combines an AI chat experience with search.

-1. **User Interface**: A web-based interface that allows users to interact with Perplexica for searching images, videos, and much more.
+For a high level flow, see [WORKING.md](WORKING.md). For deeper implementation details, see [CONTRIBUTING.md](../../CONTRIBUTING.md).
-2. **Agent/Chains**: These components predict Perplexica's next actions, understand user queries, and decide whether a web search is necessary.
-3. **SearXNG**: A metadata search engine used by Perplexica to search the web for sources.
-4. **LLMs (Large Language Models)**: Utilized by agents and chains for tasks like understanding content, writing responses, and citing sources. Examples include Claude, GPTs, etc.
-5. **Embedding Models**: To improve the accuracy of search results, embedding models re-rank the results using similarity search algorithms such as cosine similarity and dot product distance.

-For a more detailed explanation of how these components work together, see [WORKING.md](https://github.com/ItzCrazyKns/Perplexica/tree/master/docs/architecture/WORKING.md).
+## Key components
+
+1. **User Interface**
+   - A web based UI that lets users chat, search, and view citations.
+
+2. **API Routes**
+   - `POST /api/chat` powers the chat UI.
+   - `POST /api/search` provides a programmatic search endpoint.
+   - `GET /api/providers` lists available providers and model keys.
+
+3. **Agents and Orchestration**
+   - The system classifies the question first.
+   - It can run research and widgets in parallel.
+   - It generates the final answer and includes citations.
+
+4. **Search Backend**
+   - A meta search backend is used to fetch relevant web results when research is enabled.
+
+5. **LLMs (Large Language Models)**
+   - Used for classification, writing answers, and producing citations.
+
+6. **Embedding Models**
+   - Used for semantic search over user uploaded files.
+
+7. **Storage**
+   - Chats and messages are stored so conversations can be reloaded.
@@ -1,19 +1,72 @@
-# How does Perplexica work?
+# How Perplexica Works

-Curious about how Perplexica works? Don't worry, we'll cover it here. Before we begin, make sure you've read about the architecture of Perplexica to ensure you understand what it's made up of. Haven't read it? You can read it [here](https://github.com/ItzCrazyKns/Perplexica/tree/master/docs/architecture/README.md).
+This is a high level overview of how Perplexica answers a question.

-We'll understand how Perplexica works by taking an example of a scenario where a user asks: "How does an A.C. work?". We'll break down the process into steps to make it easier to understand. The steps are as follows:
+If you want a component level overview, see [README.md](README.md).

-1. The message is sent to the `/api/chat` route where it invokes the chain. The chain will depend on your focus mode. For this example, let's assume we use the "webSearch" focus mode.
+If you want implementation details, see [CONTRIBUTING.md](../../CONTRIBUTING.md).
-2. The chain is now invoked; first, the message is passed to another chain where it first predicts (using the chat history and the question) whether there is a need for sources and searching the web. If there is, it will generate a query (in accordance with the chat history) for searching the web that we'll take up later. If not, the chain will end there, and then the answer generator chain, also known as the response generator, will be started.
-3. The query returned by the first chain is passed to SearXNG to search the web for information.
-4. After the information is retrieved, it is based on keyword-based search. We then convert the information into embeddings and the query as well, then we perform a similarity search to find the most relevant sources to answer the query.
-5. After all this is done, the sources are passed to the response generator. This chain takes all the chat history, the query, and the sources. It generates a response that is streamed to the UI.

-## How are the answers cited?
+## What happens when you ask a question

-The LLMs are prompted to do so. We've prompted them so well that they cite the answers themselves, and using some UI magic, we display it to the user.
+When you send a message in the UI, the app calls `POST /api/chat`.

-## Image and Video Search
+At a high level, we do three things:

-Image and video searches are conducted in a similar manner. A query is always generated first, then we search the web for images and videos that match the query. These results are then returned to the user.
+1. Classify the question and decide what to do next.
+2. Run research and widgets in parallel.
+3. Write the final answer and include citations.
+
+## Classification
+
+Before searching or answering, we run a classification step.
+
+This step decides things like:
+
+- Whether we should do research for this question
+- Whether we should show any widgets
+- How to rewrite the question into a clearer standalone form
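The classification step described above can be pictured as producing a small structured decision. The sketch below is purely illustrative: Perplexica's real classifier is LLM driven, and the field names here (`needsResearch`, `widgets`, `standaloneQuery`) are assumed for the example, not the project's actual schema.

```typescript
// Hypothetical shape of a classification result; field names are
// illustrative, not the project's actual schema.
interface Classification {
  needsResearch: boolean; // should we search before answering?
  widgets: string[]; // widgets worth showing, e.g. ['weather']
  standaloneQuery: string; // question rewritten without chat context
}

// Toy stand-in for the LLM-driven classifier: a couple of keyword rules
// that only illustrate the kind of decision being made.
function classify(question: string): Classification {
  const widgets: string[] = [];
  if (/weather/i.test(question)) widgets.push('weather');
  const needsResearch = !/^(hi|hello|thanks)\b/i.test(question.trim());
  return { needsResearch, widgets, standaloneQuery: question.trim() };
}
```

The point is only the output shape: a plain greeting would skip research entirely, while a factual question triggers research and possibly a widget.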
+
+## Widgets
+
+Widgets are small, structured helpers that can run alongside research.
+
+Examples include weather, stocks, and simple calculations.
+
+If a widget is relevant, we show it in the UI while the answer is still being generated.
+
+Widgets are helpful context for the answer, but they are not part of what the model should cite.
+
+## Research
+
+If research is needed, we gather information in the background while widgets can run.
+
+Depending on configuration, research may include web lookup and searching user uploaded files.
+
+## Answer generation
+
+Once we have enough context, the chat model generates the final response.
+
+You can control the tradeoff between speed and quality using `optimizationMode`:
+
+- `speed`
+- `balanced`
+- `quality`
+
+## How citations work
+
+We prompt the model to cite the references it used. The UI then renders those citations alongside the supporting links.
+
+## Search API
+
+If you are integrating Perplexica into another product, you can call `POST /api/search`.
+
+It returns:
+
+- `message`: the generated answer
+- `sources`: supporting references used for the answer
+
+You can also enable streaming by setting `stream: true`.
+
+## Image and video search
+
+Image and video search use separate endpoints (`POST /api/images` and `POST /api/videos`). We generate a focused query using the chat model, then fetch matching results from a search backend.
@@ -28,8 +28,8 @@
       "notNull": true,
       "autoincrement": false
     },
-    "focusMode": {
-      "name": "focusMode",
+    "sources": {
+      "name": "sources",
       "type": "text",
       "primaryKey": false,
       "notNull": true,
next-env.d.ts (vendored, 2 changed lines)

@@ -1,6 +1,6 @@
 /// <reference types="next" />
 /// <reference types="next/image-types/global" />
-import "./.next/dev/types/routes.d.ts";
+import './.next/dev/types/routes.d.ts';

 // NOTE: This file should not be edited
 // see https://nextjs.org/docs/app/api-reference/config/typescript for more information.
@@ -11,6 +11,13 @@ const nextConfig = {
     ],
   },
   serverExternalPackages: ['pdf-parse'],
+  outputFileTracingIncludes: {
+    '/api/**': [
+      './node_modules/@napi-rs/canvas/**',
+      './node_modules/@napi-rs/canvas-linux-x64-gnu/**',
+      './node_modules/@napi-rs/canvas-linux-x64-musl/**',
+    ],
+  },
   env: {
     NEXT_PUBLIC_VERSION: pkg.version,
   },
package.json (14 changed lines)

@@ -1,11 +1,11 @@
 {
-  "name": "perplexica-frontend",
+  "name": "perplexica",
-  "version": "1.11.2",
+  "version": "1.12.1",
   "license": "MIT",
   "author": "ItzCrazyKns",
   "scripts": {
-    "dev": "next dev",
+    "dev": "next dev --webpack",
-    "build": "next build",
+    "build": "next build --webpack",
     "start": "next start",
     "lint": "next lint",
     "format:write": "prettier . --write"

@@ -19,7 +19,7 @@
     "@phosphor-icons/react": "^2.1.10",
     "@radix-ui/react-tooltip": "^1.2.8",
     "@tailwindcss/typography": "^0.5.12",
-    "@types/jspdf": "^2.0.0",
+    "@toolsycc/json-repair": "^0.1.22",
     "axios": "^1.8.3",
     "better-sqlite3": "^11.9.1",
     "clsx": "^2.1.0",

@@ -54,6 +54,7 @@
   },
   "devDependencies": {
     "@types/better-sqlite3": "^7.6.12",
+    "@types/jspdf": "^2.0.0",
     "@types/node": "^24.8.1",
     "@types/pdf-parse": "^1.1.4",
     "@types/react": "^18",

@@ -68,5 +69,8 @@
     "prettier": "^3.2.5",
     "tailwindcss": "^3.3.0",
     "typescript": "^5.9.3"
+  },
+  "optionalDependencies": {
+    "@napi-rs/canvas": "^0.1.87"
   }
 }
@@ -21,7 +21,10 @@ export const POST = async (req: Request) => {

   const images = await searchImages(
     {
-      chatHistory: body.chatHistory,
+      chatHistory: body.chatHistory.map(([role, content]) => ({
+        role: role === 'human' ? 'user' : 'assistant',
+        content,
+      })),
       query: body.query,
     },
     llm,
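The `chatHistory` change in the hunk above converts `[role, content]` tuples, as sent by API clients, into role/content message objects. The same conversion as a standalone sketch (the `HistoryTuple` and `ChatMessage` names are illustrative, not the project's actual types):

```typescript
// Convert [role, content] tuples into chat message objects, mapping
// 'human' to 'user' and anything else to 'assistant', mirroring the
// route handler change above. Type names here are illustrative.
type HistoryTuple = [string, string];

interface ChatMessage {
  role: 'user' | 'assistant';
  content: string;
}

function toMessages(history: HistoryTuple[]): ChatMessage[] {
  return history.map(([role, content]) => ({
    role: role === 'human' ? 'user' : 'assistant',
    content,
  }));
}

const messages = toMessages([
  ['human', 'Hi, how are you?'],
  ['assistant', 'I am doing well, how can I help you today?'],
]);
```

This keeps the wire format (simple tuples) decoupled from the message shape the search agent consumes internally.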
|
|||||||
@@ -1,12 +1,13 @@
 import ModelRegistry from '@/lib/models/registry';
 import { ModelWithProvider } from '@/lib/models/types';
 import SessionManager from '@/lib/session';
-import SearchAgent from '@/lib/agents/search';
 import { ChatTurnMessage } from '@/lib/types';
+import { SearchSources } from '@/lib/agents/search/types';
+import APISearchAgent from '@/lib/agents/search/api';
 
 interface ChatRequestBody {
-  optimizationMode: 'speed' | 'balanced';
-  focusMode: string;
+  optimizationMode: 'speed' | 'balanced' | 'quality';
+  sources: SearchSources[];
   chatModel: ModelWithProvider;
   embeddingModel: ModelWithProvider;
   query: string;
@@ -19,15 +20,15 @@ export const POST = async (req: Request) => {
   try {
     const body: ChatRequestBody = await req.json();
 
-    if (!body.focusMode || !body.query) {
+    if (!body.sources || !body.query) {
       return Response.json(
-        { message: 'Missing focus mode or query' },
+        { message: 'Missing sources or query' },
         { status: 400 },
       );
     }
 
     body.history = body.history || [];
-    body.optimizationMode = body.optimizationMode || 'balanced';
+    body.optimizationMode = body.optimizationMode || 'speed';
     body.stream = body.stream || false;
 
     const registry = new ModelRegistry();
@@ -48,18 +49,21 @@ export const POST = async (req: Request) => {
 
     const session = SessionManager.createSession();
 
-    const agent = new SearchAgent();
+    const agent = new APISearchAgent();
 
     agent.searchAsync(session, {
       chatHistory: history,
       config: {
         embedding: embeddings,
         llm: llm,
-        sources: ['web', 'discussions', 'academic'],
-        mode: 'balanced',
+        sources: body.sources,
+        mode: body.optimizationMode,
         fileIds: [],
+        systemInstructions: body.systemInstructions || '',
       },
       followUp: body.query,
+      chatId: crypto.randomUUID(),
+      messageId: crypto.randomUUID(),
     });
 
     if (!body.stream) {
@@ -71,13 +75,13 @@ export const POST = async (req: Request) => {
           let message = '';
           let sources: any[] = [];
 
-          session.addListener('data', (data: string) => {
+          session.subscribe((event: string, data: Record<string, any>) => {
+            if (event === 'data') {
               try {
-                const parsedData = JSON.parse(data);
-                if (parsedData.type === 'response') {
-                  message += parsedData.data;
-                } else if (parsedData.type === 'sources') {
-                  sources = parsedData.data;
+                if (data.type === 'response') {
+                  message += data.data;
+                } else if (data.type === 'searchResults') {
+                  sources = data.data;
                 }
               } catch (error) {
                 reject(
@@ -87,19 +91,20 @@ export const POST = async (req: Request) => {
                   ),
                 );
               }
-          });
+            }
 
-          session.addListener('end', () => {
+            if (event === 'end') {
               resolve(Response.json({ message, sources }, { status: 200 }));
-          });
+            }
 
-          session.addListener('error', (error: any) => {
+            if (event === 'error') {
               reject(
                 Response.json(
-                  { message: 'Search error', error },
+                  { message: 'Search error', error: data },
                   { status: 500 },
                 ),
               );
+            }
           });
         },
       );
@@ -131,23 +136,22 @@ export const POST = async (req: Request) => {
             } catch (error) {}
           });
 
-          session.addListener('data', (data: string) => {
+          session.subscribe((event: string, data: Record<string, any>) => {
+            if (event === 'data') {
               if (signal.aborted) return;
 
               try {
-                const parsedData = JSON.parse(data);
-
-                if (parsedData.type === 'response') {
+                if (data.type === 'response') {
                   controller.enqueue(
                     encoder.encode(
                       JSON.stringify({
                         type: 'response',
-                        data: parsedData.data,
+                        data: data.data,
                       }) + '\n',
                     ),
                   );
-                } else if (parsedData.type === 'sources') {
-                  sources = parsedData.data;
+                } else if (data.type === 'searchResults') {
+                  sources = data.data;
                   controller.enqueue(
                     encoder.encode(
                       JSON.stringify({
@@ -160,9 +164,9 @@ export const POST = async (req: Request) => {
               } catch (error) {
                 controller.error(error);
               }
-          });
+            }
 
-          session.addListener('end', () => {
+            if (event === 'end') {
               if (signal.aborted) return;
 
               controller.enqueue(
@@ -173,12 +177,13 @@ export const POST = async (req: Request) => {
                 ),
               );
               controller.close();
-          });
+            }
 
-          session.addListener('error', (error: any) => {
+            if (event === 'error') {
               if (signal.aborted) return;
 
-            controller.error(error);
+              controller.error(data);
+            }
           });
         },
         cancel() {
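The hunks above replace per-event `session.addListener` calls with a single `subscribe` callback that receives `(event, data)` pairs, with `data` already delivered as an object instead of a JSON string the consumer must parse. A minimal sketch of that single-callback pattern (an assumed shape for illustration, not Perplexica's actual SessionManager):

```typescript
// Minimal sketch of the single-callback session pattern the hunks above move to.
// This is an assumed shape, not the project's actual SessionManager implementation.
type Subscriber = (event: string, data: Record<string, any>) => void;

class MiniSession {
  private subscribers: Subscriber[] = [];

  subscribe(fn: Subscriber): void {
    this.subscribers.push(fn);
  }

  emit(event: string, data: Record<string, any>): void {
    // Data is delivered as an object; consumers no longer JSON.parse strings.
    this.subscribers.forEach((fn) => fn(event, data));
  }
}

const session = new MiniSession();
let message = '';

session.subscribe((event, data) => {
  if (event === 'data' && data.type === 'response') {
    message += data.data;
  }
});

session.emit('data', { type: 'response', data: 'Hello ' });
session.emit('data', { type: 'response', data: 'world' });
console.log(message); // "Hello world"
```

One callback handling all events means the `data`/`end`/`error` branches share lexical scope (`message`, `sources`), which is what lets the route accumulate state across event types without extra closures.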
@@ -20,7 +20,10 @@ export const POST = async (req: Request) => {
 
     const suggestions = await generateSuggestions(
       {
-        chatHistory: body.chatHistory,
+        chatHistory: body.chatHistory.map(([role, content]) => ({
+          role: role === 'human' ? 'user' : 'assistant',
+          content,
+        })),
       },
       llm,
     );
@@ -21,7 +21,10 @@ export const POST = async (req: Request) => {
 
     const videos = await handleVideoSearch(
       {
-        chatHistory: body.chatHistory,
+        chatHistory: body.chatHistory.map(([role, content]) => ({
+          role: role === 'human' ? 'user' : 'assistant',
+          content,
+        })),
         query: body.query,
       },
       llm,
@@ -80,7 +80,10 @@ const Chat = () => {
       {loading && !messageAppeared && <MessageBoxLoading />}
       <div ref={messageEnd} className="h-0" />
       {dividerWidth > 0 && (
-        <div className="fixed z-40 bottom-24 lg:bottom-6" style={{ width: dividerWidth }}>
+        <div
+          className="fixed z-40 bottom-24 lg:bottom-6"
+          style={{ width: dividerWidth }}
+        >
           <div
             className="pointer-events-none absolute -bottom-6 left-0 right-0 h-[calc(100%+24px+24px)] dark:hidden"
             style={{
@@ -21,8 +21,8 @@ const Copy = ({
   ) as SourceBlock[];
 
   const contentToCopy = `${initialMessage}${
-    sources.length > 0 &&
-    `\n\nCitations:\n${sources
+    sources.length > 0
+      ? `\n\nCitations:\n${sources
       .map((source) => source.data)
       .flat()
       .map(
@@ -30,6 +30,7 @@ const Copy = ({
           `[${i + 1}] ${s.metadata.url.startsWith('file_id://') ? s.metadata.fileName || 'Uploaded File' : s.metadata.url}`,
         )
         .join(`\n`)}`
+      : ''
   }`;
 
   navigator.clipboard.writeText(contentToCopy);
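The Copy hunk above fixes a classic template-literal pitfall: interpolating a `&&` expression stringifies the boolean `false` when the left side fails, so messages with no sources got a literal "false" appended. A reduced sketch of the bug and the ternary fix (names are illustrative):

```typescript
// Sketch of the bug the Copy hunk fixes (helper names are ours, not the component's).
// With `&&`, an empty citations list interpolates the boolean `false` into the string.
const render = (msg: string, citations: string[]) =>
  `${msg}${citations.length > 0 && `\n\nCitations:\n${citations.join('\n')}`}`;

// With a ternary and an explicit '' fallback, the empty case stays clean.
const renderFixed = (msg: string, citations: string[]) =>
  `${msg}${citations.length > 0 ? `\n\nCitations:\n${citations.join('\n')}` : ''}`;

console.log(render('Hello', []));      // "Hellofalse"
console.log(renderFixed('Hello', [])); // "Hello"
```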
@@ -50,7 +50,14 @@ const MessageBox = ({
   dividerRef?: MutableRefObject<HTMLDivElement | null>;
   isLast: boolean;
 }) => {
-  const { loading, sendMessage, rewrite, messages, researchEnded } = useChat();
+  const {
+    loading,
+    sendMessage,
+    rewrite,
+    messages,
+    researchEnded,
+    chatHistory,
+  } = useChat();
 
   const parsedMessage = section.parsedTextBlocks.join('\n\n');
   const speechMessage = section.speechMessage || '';
@@ -265,11 +272,11 @@ const MessageBox = ({
             <div className="lg:sticky lg:top-20 flex flex-col items-center space-y-3 w-full lg:w-3/12 z-30 h-full pb-4">
               <SearchImages
                 query={section.message.query}
-                chatHistory={messages}
+                chatHistory={chatHistory}
                 messageId={section.message.messageId}
               />
               <SearchVideos
-                chatHistory={messages}
+                chatHistory={chatHistory}
                 query={section.message.query}
                 messageId={section.message.messageId}
               />
@@ -205,8 +205,9 @@ const Navbar = () => {
   useEffect(() => {
     if (sections.length > 0 && sections[0].message) {
       const newTitle =
-        sections[0].message.query.substring(0, 30) + '...' ||
-        'New Conversation';
+        sections[0].message.query.length > 30
+          ? `${sections[0].message.query.substring(0, 30).trim()}...`
+          : sections[0].message.query || 'New Conversation';
 
       setTitle(newTitle);
       const newTimeAgo = formatTimeDifference(
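The Navbar hunk above replaces an always-truthy expression: `query.substring(0, 30) + '...'` concatenates to a non-empty string, so the `|| 'New Conversation'` fallback could never fire, and short titles still got "..." appended. A standalone sketch of the corrected logic (the helper name is ours):

```typescript
// Standalone sketch of the Navbar title logic after the fix (helper name is ours).
// Only queries longer than 30 chars are truncated; empty queries fall back.
const titleFor = (query: string): string =>
  query.length > 30
    ? `${query.substring(0, 30).trim()}...`
    : query || 'New Conversation';

console.log(titleFor(''));               // "New Conversation"
console.log(titleFor('Short question')); // "Short question"
```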
@@ -17,7 +17,7 @@ const SearchImages = ({
   messageId,
 }: {
   query: string;
-  chatHistory: Message[];
+  chatHistory: [string, string][];
   messageId: string;
 }) => {
   const [images, setImages] = useState<Image[] | null>(null);
@@ -30,7 +30,7 @@ const Searchvideos = ({
   messageId,
 }: {
   query: string;
-  chatHistory: Message[];
+  chatHistory: [string, string][];
   messageId: string;
 }) => {
   const [videos, setVideos] = useState<Video[] | null>(null);
@@ -91,7 +91,7 @@ const WeatherWidget = () => {
         setData({
           temperature: data.temperature,
           condition: data.condition,
-          location: 'Mars',
+          location: location.city,
           humidity: data.humidity,
           windSpeed: data.windSpeed,
           icon: data.icon,
@@ -1,12 +1,4 @@
 export const getSuggestions = async (chatHistory: [string, string][]) => {
-  const chatTurns = chatHistory.map(([role, content]) => {
-    if (role === 'human') {
-      return { role: 'user', content };
-    } else {
-      return { role: 'assistant', content };
-    }
-  });
-
   const chatModel = localStorage.getItem('chatModelKey');
   const chatModelProvider = localStorage.getItem('chatModelProviderId');
 
@@ -16,7 +8,7 @@ export const getSuggestions = async (chatHistory: [string, string][]) => {
       'Content-Type': 'application/json',
     },
     body: JSON.stringify({
-      chatHistory: chatTurns,
+      chatHistory,
       chatModel: {
         providerId: chatModelProvider,
         key: chatModel,
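The hunk above deletes the client-side role mapping because the API routes (images, videos, suggestions, as shown in their hunks) now map the raw `[role, content]` tuples to chat turns server-side. A standalone sketch of that mapping:

```typescript
// Sketch of the tuple-to-turn mapping the API routes now perform server-side
// (previously done client-side in getSuggestions).
type Turn = { role: 'user' | 'assistant'; content: string };

const toTurns = (chatHistory: [string, string][]): Turn[] =>
  chatHistory.map(([role, content]) => ({
    // 'human' maps to 'user'; everything else is treated as 'assistant'.
    role: role === 'human' ? 'user' : 'assistant',
    content,
  }));

const turns = toTurns([
  ['human', 'hi'],
  ['assistant', 'hello'],
]);
console.log(turns[0].role); // "user"
```

Centralizing the mapping on the server means every caller of these endpoints can send the same tuple format the chat provider already stores.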
src/lib/agents/search/api.ts (new file, 99 lines)
@@ -0,0 +1,99 @@
+import { ResearcherOutput, SearchAgentInput } from './types';
+import SessionManager from '@/lib/session';
+import { classify } from './classifier';
+import Researcher from './researcher';
+import { getWriterPrompt } from '@/lib/prompts/search/writer';
+import { WidgetExecutor } from './widgets';
+
+class APISearchAgent {
+  async searchAsync(session: SessionManager, input: SearchAgentInput) {
+    const classification = await classify({
+      chatHistory: input.chatHistory,
+      enabledSources: input.config.sources,
+      query: input.followUp,
+      llm: input.config.llm,
+    });
+
+    const widgetPromise = WidgetExecutor.executeAll({
+      classification,
+      chatHistory: input.chatHistory,
+      followUp: input.followUp,
+      llm: input.config.llm,
+    });
+
+    let searchPromise: Promise<ResearcherOutput> | null = null;
+
+    if (!classification.classification.skipSearch) {
+      const researcher = new Researcher();
+      searchPromise = researcher.research(SessionManager.createSession(), {
+        chatHistory: input.chatHistory,
+        followUp: input.followUp,
+        classification: classification,
+        config: input.config,
+      });
+    }
+
+    const [widgetOutputs, searchResults] = await Promise.all([
+      widgetPromise,
+      searchPromise,
+    ]);
+
+    if (searchResults) {
+      session.emit('data', {
+        type: 'searchResults',
+        data: searchResults.searchFindings,
+      });
+    }
+
+    session.emit('data', {
+      type: 'researchComplete',
+    });
+
+    const finalContext =
+      searchResults?.searchFindings
+        .map(
+          (f, index) =>
+            `<result index=${index + 1} title=${f.metadata.title}>${f.content}</result>`,
+        )
+        .join('\n') || '';
+
+    const widgetContext = widgetOutputs
+      .map((o) => {
+        return `<result>${o.llmContext}</result>`;
+      })
+      .join('\n-------------\n');
+
+    const finalContextWithWidgets = `<search_results note="These are the search results and assistant can cite these">\n${finalContext}\n</search_results>\n<widgets_result noteForAssistant="Its output is already showed to the user, assistant can use this information to answer the query but do not CITE this as a souce">\n${widgetContext}\n</widgets_result>`;
+
+    const writerPrompt = getWriterPrompt(
+      finalContextWithWidgets,
+      input.config.systemInstructions,
+      input.config.mode,
+    );
+
+    const answerStream = input.config.llm.streamText({
+      messages: [
+        {
+          role: 'system',
+          content: writerPrompt,
+        },
+        ...input.chatHistory,
+        {
+          role: 'user',
+          content: input.followUp,
+        },
+      ],
+    });
+
+    for await (const chunk of answerStream) {
+      session.emit('data', {
+        type: 'response',
+        data: chunk.contentChunk,
+      });
+    }
+
+    session.emit('end', {});
+  }
+}
+
+export default APISearchAgent;
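`APISearchAgent.searchAsync` above runs widget execution and research concurrently and joins them with a single `Promise.all`, leaving the research branch as `null` when classification says to skip search. A reduced sketch of that join pattern (`runWidgets`/`runResearch` are illustrative stand-ins for `WidgetExecutor.executeAll` and `Researcher.research`):

```typescript
// Reduced sketch of the Promise.all join in APISearchAgent.searchAsync.
// runWidgets/runResearch are illustrative stand-ins, not the real implementations.
const runWidgets = async (): Promise<string[]> => ['widget-output'];
const runResearch = async (): Promise<{ findings: string[] }> => ({
  findings: ['finding-1', 'finding-2'],
});

const search = async (skipSearch: boolean) => {
  const widgetPromise = runWidgets();
  // Promise.all accepts non-promise values too, so a null branch is fine:
  // it resolves to null without blocking the widget branch.
  const searchPromise = skipSearch ? null : runResearch();

  const [widgets, results] = await Promise.all([widgetPromise, searchPromise]);
  return { widgets, findings: results?.findings ?? [] };
};

search(true).then((r) => console.log(r.findings.length));  // 0
search(false).then((r) => console.log(r.findings.length)); // 2
```

Starting both promises before awaiting either is what makes the widget and research phases overlap instead of running back-to-back.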
@@ -17,7 +17,7 @@ Here are some examples of good plans:
 <examples>
 - "Okay, the user wants to know the latest advancements in renewable energy. I will start by looking for recent articles and studies on this topic, then summarize the key points." -> "I have gathered enough information to provide a comprehensive answer."
 - "The user is asking about the health benefits of a Mediterranean diet. I will search for scientific studies and expert opinions on this diet, then compile the findings into a clear summary." -> "I have gathered information about the Mediterranean diet and its health benefits, I will now look up for any recent studies to ensure the information is current."
-<examples>
+</examples>
 
 YOU CAN NEVER CALL ANY OTHER TOOL BEFORE CALLING THIS ONE FIRST, IF YOU DO, THAT CALL WOULD BE IGNORED.
 `;
@@ -51,6 +51,10 @@ const calculationWidget: Widget = {
       schema,
     });
 
+    if (output.notPresent) {
+      return;
+    }
+
     const result = mathEval(output.expression);
 
     return {
@@ -3,7 +3,6 @@ import { suggestionGeneratorPrompt } from '@/lib/prompts/suggestions';
 import { ChatTurnMessage } from '@/lib/types';
 import z from 'zod';
 import BaseLLM from '@/lib/models/base/llm';
-import { i } from 'mathjs';
 
 type SuggestionGeneratorInput = {
   chatHistory: ChatTurnMessage[];
@@ -175,7 +175,7 @@ const loadMessages = async (
   chatId: string,
   setMessages: (messages: Message[]) => void,
   setIsMessagesLoaded: (loaded: boolean) => void,
-  setChatHistory: (history: [string, string][]) => void,
+  chatHistory: React.MutableRefObject<[string, string][]>,
   setSources: (sources: string[]) => void,
   setNotFound: (notFound: boolean) => void,
   setFiles: (files: File[]) => void,
@@ -233,7 +233,7 @@ const loadMessages = async (
   setFiles(files);
   setFileIds(files.map((file: File) => file.fileId));
 
-  setChatHistory(history);
+  chatHistory.current = history;
   setSources(data.chat.sources);
   setIsMessagesLoaded(true);
 };
@@ -281,7 +281,7 @@ export const ChatProvider = ({ children }: { children: React.ReactNode }) => {
 
   const [researchEnded, setResearchEnded] = useState(false);
 
-  const [chatHistory, setChatHistory] = useState<[string, string][]>([]);
+  const chatHistory = useRef<[string, string][]>([]);
   const [messages, setMessages] = useState<Message[]>([]);
 
   const [files, setFiles] = useState<File[]>([]);
@@ -402,7 +402,12 @@ export const ChatProvider = ({ children }: { children: React.ReactNode }) => {
     });
   }, [messages]);
 
+  const isReconnectingRef = useRef(false);
+  const handledMessageEndRef = useRef<Set<string>>(new Set());
+
   const checkReconnect = async () => {
+    if (isReconnectingRef.current) return;
+
     setIsReady(true);
     console.debug(new Date(), 'app:ready');
 
@@ -414,6 +419,8 @@ export const ChatProvider = ({ children }: { children: React.ReactNode }) => {
       setResearchEnded(false);
       setMessageAppeared(false);
 
+      isReconnectingRef.current = true;
+
       const res = await fetch(`/api/reconnect/${lastMsg.backendId}`, {
         method: 'POST',
       });
@@ -427,6 +434,7 @@ export const ChatProvider = ({ children }: { children: React.ReactNode }) => {
 
       const messageHandler = getMessageHandler(lastMsg);
 
+      try {
         while (true) {
           const { value, done } = await reader.read();
           if (done) break;
@@ -445,6 +453,9 @@ export const ChatProvider = ({ children }: { children: React.ReactNode }) => {
             console.warn('Incomplete JSON, waiting for next chunk...');
           }
         }
+      } finally {
+        isReconnectingRef.current = false;
+      }
       }
     }
   };
@@ -463,7 +474,7 @@ export const ChatProvider = ({ children }: { children: React.ReactNode }) => {
     if (params.chatId && params.chatId !== chatId) {
       setChatId(params.chatId);
       setMessages([]);
-      setChatHistory([]);
+      chatHistory.current = [];
       setFiles([]);
       setFileIds([]);
       setIsMessagesLoaded(false);
@@ -483,7 +494,7 @@ export const ChatProvider = ({ children }: { children: React.ReactNode }) => {
       chatId,
       setMessages,
       setIsMessagesLoaded,
-      setChatHistory,
+      chatHistory,
       setSources,
       setNotFound,
       setFiles,
@@ -519,9 +530,7 @@ export const ChatProvider = ({ children }: { children: React.ReactNode }) => {
 
     setMessages((prev) => prev.slice(0, index));
 
-    setChatHistory((prev) => {
-      return prev.slice(0, index * 2);
-    });
+    chatHistory.current = chatHistory.current.slice(0, index * 2);
 
     const messageToRewrite = messages[index];
     sendMessage(messageToRewrite.query, messageToRewrite.messageId, true);
@@ -570,6 +579,20 @@ export const ChatProvider = ({ children }: { children: React.ReactNode }) => {
     setMessages((prev) =>
       prev.map((msg) => {
         if (msg.messageId === messageId) {
+          const exists = msg.responseBlocks.findIndex(
+            (b) => b.id === data.block.id,
+          );
+
+          if (exists !== -1) {
+            const existingBlocks = [...msg.responseBlocks];
+            existingBlocks[exists] = data.block;
+
+            return {
+              ...msg,
+              responseBlocks: existingBlocks,
+            };
+          }
+
           return {
             ...msg,
             responseBlocks: [...msg.responseBlocks, data.block],
@@ -607,12 +630,18 @@ export const ChatProvider = ({ children }: { children: React.ReactNode }) => {
     }
 
     if (data.type === 'messageEnd') {
+      if (handledMessageEndRef.current.has(messageId)) {
+        return;
+      }
+
+      handledMessageEndRef.current.add(messageId);
+
       const currentMsg = messagesRef.current.find(
         (msg) => msg.messageId === messageId,
       );
 
       const newHistory: [string, string][] = [
-        ...chatHistory,
+        ...chatHistory.current,
         ['human', message.query],
         [
           'assistant',
@@ -621,7 +650,7 @@ export const ChatProvider = ({ children }: { children: React.ReactNode }) => {
         ],
       ];
 
-      setChatHistory(newHistory);
+      chatHistory.current = newHistory;
 
       setMessages((prev) =>
         prev.map((msg) =>
@@ -638,6 +667,7 @@ export const ChatProvider = ({ children }: { children: React.ReactNode }) => {
       const autoMediaSearch = getAutoMediaSearch();
 
       if (autoMediaSearch) {
+        setTimeout(() => {
           document
             .getElementById(`search-images-${lastMsg.messageId}`)
             ?.click();
@@ -645,6 +675,7 @@ export const ChatProvider = ({ children }: { children: React.ReactNode }) => {
           document
             .getElementById(`search-videos-${lastMsg.messageId}`)
             ?.click();
+        }, 200);
       }
 
       // Check if there are sources and no suggestions
@@ -728,8 +759,11 @@ export const ChatProvider = ({ children }: { children: React.ReactNode }) => {
       sources: sources,
       optimizationMode: optimizationMode,
       history: rewrite
-        ? chatHistory.slice(0, messageIndex === -1 ? undefined : messageIndex)
-        : chatHistory,
+        ? chatHistory.current.slice(
+            0,
+            messageIndex === -1 ? undefined : messageIndex,
+          )
+        : chatHistory.current,
       chatModel: {
         key: chatModelProvider.key,
         providerId: chatModelProvider.providerId,
@@ -776,7 +810,7 @@ export const ChatProvider = ({ children }: { children: React.ReactNode }) => {
     value={{
       messages,
       sections,
-      chatHistory,
+      chatHistory: chatHistory.current,
       files,
       fileIds,
       sources,
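The provider hunks above move `chatHistory` from `useState` into a `useRef`, so long-lived streaming handlers always read the latest history through `.current` instead of a value captured when the handler was created. A framework-free sketch of the difference (names are illustrative, not the provider's code):

```typescript
// Framework-free sketch of why the hunks above move chatHistory into a ref:
// a handler created once keeps seeing updates through the ref's `.current`,
// whereas a value captured at creation time goes stale. Names are illustrative.
type HistoryRef = { current: [string, string][] };

const makeHandler = (ref: HistoryRef, snapshot: [string, string][]) => ({
  viaRef: () => ref.current.length,
  viaSnapshot: () => snapshot.length,
});

const ref: HistoryRef = { current: [] };
const handler = makeHandler(ref, ref.current);

// Later, the history is replaced (as on messageEnd in the provider).
ref.current = [...ref.current, ['human', 'hi'], ['assistant', 'hello']];

console.log(handler.viaRef());      // 2 — the ref reflects the latest history
console.log(handler.viaSnapshot()); // 0 — the captured snapshot is stale
```

A trade-off of the ref approach is that updating `.current` does not trigger a re-render, which is why the provider exposes `chatHistory.current` through context alongside state that does re-render.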
@@ -7,6 +7,7 @@ import TransformersProvider from './transformers';
 import GroqProvider from './groq';
 import LemonadeProvider from './lemonade';
 import AnthropicProvider from './anthropic';
+import LMStudioProvider from './lmstudio';
 
 export const providers: Record<string, ProviderConstructor<any>> = {
   openai: OpenAIProvider,
@@ -16,6 +17,7 @@ export const providers: Record<string, ProviderConstructor<any>> = {
   groq: GroqProvider,
   lemonade: LemonadeProvider,
   anthropic: AnthropicProvider,
+  lmstudio: LMStudioProvider,
 };
 
 export const getModelProvidersUIConfigSection =
src/lib/models/providers/lmstudio/index.ts (new file, 143 lines, truncated)
@@ -0,0 +1,143 @@
+import { UIConfigField } from '@/lib/config/types';
+import { getConfiguredModelProviderById } from '@/lib/config/serverRegistry';
+import BaseModelProvider from '../../base/provider';
+import { Model, ModelList, ProviderMetadata } from '../../types';
+import LMStudioLLM from './lmstudioLLM';
+import BaseLLM from '../../base/llm';
+import BaseEmbedding from '../../base/embedding';
+import LMStudioEmbedding from './lmstudioEmbedding';
+
+interface LMStudioConfig {
+  baseURL: string;
+}
+
+const providerConfigFields: UIConfigField[] = [
+  {
+    type: 'string',
+    name: 'Base URL',
+    key: 'baseURL',
+    description: 'The base URL for LM Studio server',
+    required: true,
+    placeholder: 'http://localhost:1234',
+    env: 'LM_STUDIO_BASE_URL',
+    scope: 'server',
+  },
+];
+
+class LMStudioProvider extends BaseModelProvider<LMStudioConfig> {
+  constructor(id: string, name: string, config: LMStudioConfig) {
+    super(id, name, config);
+  }
+
+  private normalizeBaseURL(url: string): string {
+    const trimmed = url.trim().replace(/\/+$/, '');
+    return trimmed.endsWith('/v1') ? trimmed : `${trimmed}/v1`;
+  }
+
+  async getDefaultModels(): Promise<ModelList> {
+    try {
+      const baseURL = this.normalizeBaseURL(this.config.baseURL);
+
+      const res = await fetch(`${baseURL}/models`, {
+        method: 'GET',
+        headers: {
+          'Content-Type': 'application/json',
+        },
+      });
+
+      const data = await res.json();
+
+      const models: Model[] = data.data.map((m: any) => {
+        return {
+          name: m.id,
+          key: m.id,
+        };
+      });
+
+      return {
+        embedding: models,
+        chat: models,
+      };
+    } catch (err) {
+      if (err instanceof TypeError) {
+        throw new Error(
+          'Error connecting to LM Studio. Please ensure the base URL is correct and the LM Studio server is running.',
+        );
+      }
+
+      throw err;
+    }
+  }
+
+  async getModelList(): Promise<ModelList> {
+    const defaultModels = await this.getDefaultModels();
+    const configProvider = getConfiguredModelProviderById(this.id)!;
+
+    return {
+      embedding: [
+        ...defaultModels.embedding,
+        ...configProvider.embeddingModels,
+      ],
+      chat: [...defaultModels.chat, ...configProvider.chatModels],
+    };
+  }
+
+  async loadChatModel(key: string): Promise<BaseLLM<any>> {
+    const modelList = await this.getModelList();
+
+    const exists = modelList.chat.find((m) => m.key === key);
+
+    if (!exists) {
+      throw new Error(
+        'Error Loading LM Studio Chat Model. Invalid Model Selected',
+      );
+    }
+
+    return new LMStudioLLM({
+      apiKey: 'lm-studio',
+      model: key,
+      baseURL: this.normalizeBaseURL(this.config.baseURL),
+    });
+  }
+
+  async loadEmbeddingModel(key: string): Promise<BaseEmbedding<any>> {
+    const modelList = await this.getModelList();
+    const exists = modelList.embedding.find((m) => m.key === key);
+
+    if (!exists) {
+      throw new Error(
+        'Error Loading LM Studio Embedding Model. Invalid Model Selected.',
+      );
+    }
+
+    return new LMStudioEmbedding({
+      apiKey: 'lm-studio',
||||||
|
model: key,
|
||||||
|
baseURL: this.normalizeBaseURL(this.config.baseURL),
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
static parseAndValidate(raw: any): LMStudioConfig {
|
||||||
|
if (!raw || typeof raw !== 'object')
|
||||||
|
throw new Error('Invalid config provided. Expected object');
|
||||||
|
if (!raw.baseURL)
|
||||||
|
throw new Error('Invalid config provided. Base URL must be provided');
|
||||||
|
|
||||||
|
return {
|
||||||
|
baseURL: String(raw.baseURL),
|
||||||
|
};
|
||||||
|
}
|
||||||
|
|
||||||
|
static getProviderConfigFields(): UIConfigField[] {
|
||||||
|
return providerConfigFields;
|
||||||
|
}
|
||||||
|
|
||||||
|
static getProviderMetadata(): ProviderMetadata {
|
||||||
|
return {
|
||||||
|
key: 'lmstudio',
|
||||||
|
name: 'LM Studio',
|
||||||
|
};
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
export default LMStudioProvider;
|
||||||
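The `normalizeBaseURL` helper in the new provider is small enough to exercise on its own. This sketch copies its body into a free function (a rearrangement for illustration, not part of the commit) to show how it maps user-supplied URLs onto the OpenAI-compatible `/v1` endpoint LM Studio exposes:

```typescript
// Free-function copy of LMStudioProvider.normalizeBaseURL, for illustration.
// It trims whitespace, strips trailing slashes, and appends `/v1` only when
// the suffix is not already present, so the normalization is idempotent.
function normalizeBaseURL(url: string): string {
  const trimmed = url.trim().replace(/\/+$/, '');
  return trimmed.endsWith('/v1') ? trimmed : `${trimmed}/v1`;
}

console.log(normalizeBaseURL('http://localhost:1234/')); // http://localhost:1234/v1
console.log(normalizeBaseURL('http://localhost:1234/v1')); // http://localhost:1234/v1
```

Because the result always ends in `/v1`, the provider can safely concatenate paths like `${baseURL}/models` regardless of how the user typed the base URL.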
5 src/lib/models/providers/lmstudio/lmstudioEmbedding.ts Normal file
@@ -0,0 +1,5 @@
import OpenAIEmbedding from '../openai/openaiEmbedding';

class LMStudioEmbedding extends OpenAIEmbedding {}

export default LMStudioEmbedding;
5 src/lib/models/providers/lmstudio/lmstudioLLM.ts Normal file
@@ -0,0 +1,5 @@
import OpenAILLM from '../openai/openaiLLM';

class LMStudioLLM extends OpenAILLM {}

export default LMStudioLLM;
@@ -11,6 +11,7 @@ import { Ollama, Tool as OllamaTool, Message as OllamaMessage } from 'ollama';
 import { parse } from 'partial-json';
 import crypto from 'crypto';
 import { Message } from '@/lib/types';
+import { repairJson } from '@toolsycc/json-repair';
 
 type OllamaConfig = {
   baseURL: string;
@@ -205,7 +206,13 @@ class OllamaLLM extends BaseLLM<OllamaConfig> {
     });
 
     try {
-      return input.schema.parse(JSON.parse(response.message.content)) as T;
+      return input.schema.parse(
+        JSON.parse(
+          repairJson(response.message.content, {
+            extractJson: true,
+          }) as string,
+        ),
+      ) as T;
     } catch (err) {
       throw new Error(`Error parsing response from Ollama: ${err}`);
     }
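The hunk above swaps a bare `JSON.parse` for `repairJson(..., { extractJson: true })`, which tolerates model replies that wrap the JSON payload in surrounding prose. As a rough illustration of the failure mode being handled, the extractor below is a naive stand-in (not the library's implementation) that pulls the first `{...}` span out of a chatty reply:

```typescript
// Naive stand-in for the extract-JSON behavior: find the outermost braces
// and hand only that slice to JSON.parse. Real LLM output can be messier
// (truncated, single-quoted, fenced), which is what a repair library covers.
function extractFirstJsonObject(text: string): string {
  const start = text.indexOf('{');
  const end = text.lastIndexOf('}');
  if (start === -1 || end <= start) throw new Error('No JSON object found');
  return text.slice(start, end + 1);
}

const reply = 'Sure! Here is the result: {"query": "golden gate bridge"} Hope that helps.';
const parsed = JSON.parse(extractFirstJsonObject(reply));
console.log(parsed.query); // golden gate bridge
```

A direct `JSON.parse(reply)` on that string would throw, which is exactly the structured-output error this commit works around before handing the value to `input.schema.parse`.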
@@ -61,6 +61,22 @@ const defaultChatModels: Model[] = [
     name: 'GPT 5 Mini',
     key: 'gpt-5-mini',
   },
+  {
+    name: 'GPT 5 Pro',
+    key: 'gpt-5-pro',
+  },
+  {
+    name: 'GPT 5.1',
+    key: 'gpt-5.1',
+  },
+  {
+    name: 'GPT 5.2',
+    key: 'gpt-5.2',
+  },
+  {
+    name: 'GPT 5.2 Pro',
+    key: 'gpt-5.2-pro',
+  },
   {
     name: 'o1',
     key: 'o1',
@@ -18,6 +18,7 @@ import {
   ChatCompletionToolMessageParam,
 } from 'openai/resources/index.mjs';
 import { Message } from '@/lib/types';
+import { repairJson } from '@toolsycc/json-repair';
 
 type OpenAIConfig = {
   apiKey: string;
@@ -167,7 +168,7 @@ class OpenAILLM extends BaseLLM<OpenAIConfig> {
             contentChunk: chunk.choices[0].delta.content || '',
             toolCallChunk:
               toolCalls?.map((tc) => {
-                if (tc.type === 'function') {
+                if (!recievedToolCalls[tc.index]) {
                   const call = {
                     name: tc.function?.name!,
                     id: tc.id!,
@@ -213,7 +214,13 @@
 
     if (response.choices && response.choices.length > 0) {
       try {
-        return input.schema.parse(response.choices[0].message.parsed) as T;
+        return input.schema.parse(
+          JSON.parse(
+            repairJson(response.choices[0].message.content!, {
+              extractJson: true,
+            }) as string,
+          ),
+        ) as T;
       } catch (err) {
         throw new Error(`Error parsing response from OpenAI: ${err}`);
       }
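For context on the `!recievedToolCalls[tc.index]` change above: OpenAI-style streaming delivers tool calls as per-index deltas, where only the first delta for an index carries the id and name, and later deltas append argument fragments. A sketch of that accumulation pattern is below; the delta shape is simplified for illustration, and `recievedToolCalls` deliberately mirrors the identifier (including its spelling) in the source:

```typescript
// Simplified delta shape, modeled on OpenAI streaming tool-call chunks.
type ToolCallDelta = {
  index: number;
  id?: string;
  function?: { name?: string; arguments?: string };
};

// Accumulator keyed by tool-call index, as in the hunk above.
const recievedToolCalls: Record<number, { id: string; name: string; args: string }> = {};

function handleDelta(tc: ToolCallDelta) {
  if (!recievedToolCalls[tc.index]) {
    // First delta for this index: it carries id and name.
    recievedToolCalls[tc.index] = {
      id: tc.id ?? '',
      name: tc.function?.name ?? '',
      args: '',
    };
  }
  // Every delta (including the first) may append an argument fragment.
  recievedToolCalls[tc.index].args += tc.function?.arguments ?? '';
}

const deltas: ToolCallDelta[] = [
  { index: 0, id: 'call_1', function: { name: 'search', arguments: '{"q":' } },
  { index: 0, function: { arguments: '"cats"}' } },
];
deltas.forEach(handleDelta);

console.log(recievedToolCalls[0].args); // {"q":"cats"}
```

Checking `tc.type === 'function'` would miss continuation deltas that omit `type`, which is why keying the "new call" branch on the index is the safer test.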
@@ -1,6 +1,6 @@
 import { Chunk } from '@/lib/types';
 import BaseEmbedding from '../../base/embedding';
-import { FeatureExtractionPipeline, pipeline } from '@huggingface/transformers';
+import { FeatureExtractionPipeline } from '@huggingface/transformers';
 
 type TransformerConfig = {
   model: string;
@@ -21,21 +21,19 @@ class TransformerEmbedding extends BaseEmbedding<TransformerConfig> {
     return this.embed(chunks.map((c) => c.content));
   }
 
-  async embed(texts: string[]): Promise<number[][]> {
+  private async embed(texts: string[]) {
     if (!this.pipelinePromise) {
       this.pipelinePromise = (async () => {
-        const transformers = await import('@huggingface/transformers');
-        return (await transformers.pipeline(
-          'feature-extraction',
-          this.config.model,
-        )) as unknown as FeatureExtractionPipeline;
+        const { pipeline } = await import('@huggingface/transformers');
+        const result = await pipeline('feature-extraction', this.config.model, {
+          dtype: 'fp32',
+        });
+        return result as FeatureExtractionPipeline;
       })();
     }
 
-    const pipeline = await this.pipelinePromise;
-    const output = await pipeline(texts, { pooling: 'mean', normalize: true });
+    const pipe = await this.pipelinePromise;
+    const output = await pipe(texts, { pooling: 'mean', normalize: true });
 
     return output.tolist() as number[][];
   }
 }
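The rewrite above keeps the lazy, shared `pipelinePromise` so that concurrent `embed` calls trigger at most one model load: the promise itself is memoized, not its resolved value. A minimal sketch of that pattern, with a stubbed loader standing in for the dynamic `@huggingface/transformers` import (names here are illustrative):

```typescript
// Sketch of the shared-promise lazy initialization used by TransformerEmbedding.
// `loadModel` stands in for the expensive dynamic import + pipeline() call.
let loads = 0;
let pipelinePromise: Promise<(texts: string[]) => number[]> | null = null;

async function loadModel(): Promise<(texts: string[]) => number[]> {
  loads++; // expensive work runs once, no matter how many callers race
  return (texts: string[]) => texts.map((t) => t.length); // stub "embedding"
}

async function embed(texts: string[]): Promise<number[]> {
  if (!pipelinePromise) {
    pipelinePromise = loadModel(); // store the promise before awaiting it
  }
  const pipe = await pipelinePromise;
  return pipe(texts);
}

(async () => {
  // Two racing callers still share one load.
  const [a, b] = await Promise.all([embed(['hi']), embed(['hello'])]);
  console.log(a[0], b[0], loads); // 2 5 1
})();
```

Storing the promise (rather than the resolved pipeline) before the first `await` is what makes the pattern race-safe in a single-threaded event loop: the second caller sees a non-null `pipelinePromise` and awaits the same in-flight load.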
@@ -3,6 +3,7 @@ import { ChatTurnMessage } from '@/lib/types';
 export const imageSearchPrompt = `
 You will be given a conversation below and a follow up question. You need to rephrase the follow-up question so it is a standalone question that can be used by the LLM to search the web for images.
 You need to make sure the rephrased question agrees with the conversation and is relevant to the conversation.
+Make sure to make the querey standalone and not something very broad, use context from the answers in the conversation to make it specific so user can get best image search results.
 Output only the rephrased query in query key JSON format. Do not include any explanation or additional text.
 `;
@@ -3,6 +3,7 @@ import { ChatTurnMessage } from '@/lib/types';
 export const videoSearchPrompt = `
 You will be given a conversation below and a follow up question. You need to rephrase the follow-up question so it is a standalone question that can be used by the LLM to search Youtube for videos.
 You need to make sure the rephrased question agrees with the conversation and is relevant to the conversation.
+Make sure to make the querey standalone and not something very broad, use context from the answers in the conversation to make it specific so user can get best video search results.
 Output only the rephrased query in query key JSON format. Do not include any explanation or additional text.
 `;
@@ -1,4 +1,3 @@
-import axios from 'axios';
 import { getSearxngURL } from './config/serverRegistry';
 
 interface SearxngSearchOptions {
@@ -4,8 +4,8 @@ import crypto from "crypto"
 import fs from 'fs';
 import { splitText } from "../utils/splitText";
 import { PDFParse } from 'pdf-parse';
+import { CanvasFactory } from 'pdf-parse/worker';
 import officeParser from 'officeparser'
-import { Chunk } from "../types";
 
 const supportedMimeTypes = ['application/pdf', 'application/vnd.openxmlformats-officedocument.wordprocessingml.document', 'text/plain'] as const
 
@@ -116,7 +116,8 @@ class UploadManager {
     const pdfBuffer = fs.readFileSync(filePath);
 
     const parser = new PDFParse({
-      data: pdfBuffer
+      data: pdfBuffer,
+      CanvasFactory
     })
 
     const pdfText = await parser.getText().then(res => res.text)
77 yarn.lock
@@ -797,6 +797,11 @@
   resolved "https://registry.yarnpkg.com/@napi-rs/canvas-android-arm64/-/canvas-android-arm64-0.1.84.tgz#7b476e3003be0aca08ab27962fd0d6e803939bec"
   integrity sha512-pdvuqvj3qtwVryqgpAGornJLV6Ezpk39V6wT4JCnRVGy8I3Tk1au8qOalFGrx/r0Ig87hWslysPpHBxVpBMIww==
 
+"@napi-rs/canvas-android-arm64@0.1.87":
+  version "0.1.87"
+  resolved "https://registry.yarnpkg.com/@napi-rs/canvas-android-arm64/-/canvas-android-arm64-0.1.87.tgz#6adce7741baa56e75dcf72076e4bf249f9bc4b8e"
+  integrity sha512-uW7NxJXPvZft9fers4oBhdCsBRVe77DLQS3eXEOxndFzGKiwmjIbZpQqj4QPvrg3I0FM3UfHatz1+17P5SeCOQ==
+
 "@napi-rs/canvas-darwin-arm64@0.1.80":
   version "0.1.80"
   resolved "https://registry.yarnpkg.com/@napi-rs/canvas-darwin-arm64/-/canvas-darwin-arm64-0.1.80.tgz#638eaa2d0a2a373c7d15748743182718dcd95c4b"
@@ -807,6 +812,11 @@
   resolved "https://registry.yarnpkg.com/@napi-rs/canvas-darwin-arm64/-/canvas-darwin-arm64-0.1.84.tgz#0f131722f9f66316cea5f5ed7cfb9ad1290683cd"
   integrity sha512-A8IND3Hnv0R6abc6qCcCaOCujTLMmGxtucMTZ5vbQUrEN/scxi378MyTLtyWg+MRr6bwQJ6v/orqMS9datIcww==
 
+"@napi-rs/canvas-darwin-arm64@0.1.87":
+  version "0.1.87"
+  resolved "https://registry.yarnpkg.com/@napi-rs/canvas-darwin-arm64/-/canvas-darwin-arm64-0.1.87.tgz#54f82dc0cd032f85e770abcddbeafc855005931a"
+  integrity sha512-S6YbpXwajDKLTsYftEqR+Ne1lHpeC78okI3IqctVdFexN31Taprn6mdV4CkPY/4S8eGNuReBHvXNyWbGqBZ1eQ==
+
 "@napi-rs/canvas-darwin-x64@0.1.80":
   version "0.1.80"
   resolved "https://registry.yarnpkg.com/@napi-rs/canvas-darwin-x64/-/canvas-darwin-x64-0.1.80.tgz#bd6bc048dbd4b02b9620d9d07117ed93e6970978"
@@ -817,6 +827,11 @@
   resolved "https://registry.yarnpkg.com/@napi-rs/canvas-darwin-x64/-/canvas-darwin-x64-0.1.84.tgz#e6ab8c534172d8a8d434fa090da8a205359d8769"
   integrity sha512-AUW45lJhYWwnA74LaNeqhvqYKK/2hNnBBBl03KRdqeCD4tKneUSrxUqIv8d22CBweOvrAASyKN3W87WO2zEr/A==
 
+"@napi-rs/canvas-darwin-x64@0.1.87":
+  version "0.1.87"
+  resolved "https://registry.yarnpkg.com/@napi-rs/canvas-darwin-x64/-/canvas-darwin-x64-0.1.87.tgz#54c2be73ce69a65e70f3d94fb879907479cc12c5"
+  integrity sha512-OJLwP2WIUmRSqWTyV/NZ2TnvBzUsbNqQu6IL7oshwfxYg4BELPV279wrfQ/xZFqzr7wybfIzKaPF4du5ZdA2Cg==
+
 "@napi-rs/canvas-linux-arm-gnueabihf@0.1.80":
   version "0.1.80"
   resolved "https://registry.yarnpkg.com/@napi-rs/canvas-linux-arm-gnueabihf/-/canvas-linux-arm-gnueabihf-0.1.80.tgz#ce6bfbeb19d9234c42df5c384e5989aa7d734789"
@@ -827,6 +842,11 @@
   resolved "https://registry.yarnpkg.com/@napi-rs/canvas-linux-arm-gnueabihf/-/canvas-linux-arm-gnueabihf-0.1.84.tgz#5898daa3050a8ba4619c1d6cea3a3217d46c5ffd"
   integrity sha512-8zs5ZqOrdgs4FioTxSBrkl/wHZB56bJNBqaIsfPL4ZkEQCinOkrFF7xIcXiHiKp93J3wUtbIzeVrhTIaWwqk+A==
 
+"@napi-rs/canvas-linux-arm-gnueabihf@0.1.87":
+  version "0.1.87"
+  resolved "https://registry.yarnpkg.com/@napi-rs/canvas-linux-arm-gnueabihf/-/canvas-linux-arm-gnueabihf-0.1.87.tgz#1bc3c9280db381cc3893c3d13d7320666ce47ebe"
+  integrity sha512-Io3tY6ogc+oyvIGK9rQlnfH4gKiS35P7W6s22x3WCrLFR0dXzZP2IBBoEFEHd6FY6FR1ky5u9cRmADaiLRdX3g==
+
 "@napi-rs/canvas-linux-arm64-gnu@0.1.80":
   version "0.1.80"
   resolved "https://registry.yarnpkg.com/@napi-rs/canvas-linux-arm64-gnu/-/canvas-linux-arm64-gnu-0.1.80.tgz#3b7a7832fef763826fa5fb740d5757204e52607d"
@@ -837,6 +857,11 @@
   resolved "https://registry.yarnpkg.com/@napi-rs/canvas-linux-arm64-gnu/-/canvas-linux-arm64-gnu-0.1.84.tgz#fbbde94c04278259f1f40b4c199dfd9f95c82e66"
   integrity sha512-i204vtowOglJUpbAFWU5mqsJgH0lVpNk/Ml4mQtB4Lndd86oF+Otr6Mr5KQnZHqYGhlSIKiU2SYnUbhO28zGQA==
 
+"@napi-rs/canvas-linux-arm64-gnu@0.1.87":
+  version "0.1.87"
+  resolved "https://registry.yarnpkg.com/@napi-rs/canvas-linux-arm64-gnu/-/canvas-linux-arm64-gnu-0.1.87.tgz#70af2d77d58d65559a43c55f6c37906c220af39e"
+  integrity sha512-Zq7h/PQzs37gaSR/gNRZOAaCC1kGt6NmDjA1PcqpONITh/rAfAwAeP98emrbBJ4FDoPkYRkxmxHlmXNLlsQIBw==
+
 "@napi-rs/canvas-linux-arm64-musl@0.1.80":
   version "0.1.80"
   resolved "https://registry.yarnpkg.com/@napi-rs/canvas-linux-arm64-musl/-/canvas-linux-arm64-musl-0.1.80.tgz#d8ccd91f31d70760628623cd575134ada17690a3"
@@ -847,6 +872,11 @@
   resolved "https://registry.yarnpkg.com/@napi-rs/canvas-linux-arm64-musl/-/canvas-linux-arm64-musl-0.1.84.tgz#8b02c46c5dbb0a58de87885c61ca1681b1199697"
   integrity sha512-VyZq0EEw+OILnWk7G3ZgLLPaz1ERaPP++jLjeyLMbFOF+Tr4zHzWKiKDsEV/cT7btLPZbVoR3VX+T9/QubnURQ==
 
+"@napi-rs/canvas-linux-arm64-musl@0.1.87":
+  version "0.1.87"
+  resolved "https://registry.yarnpkg.com/@napi-rs/canvas-linux-arm64-musl/-/canvas-linux-arm64-musl-0.1.87.tgz#31355f0debf35848851be97aa1136905db9cef2a"
+  integrity sha512-CUa5YJjpsFcUxJbtfoQ4bqO/Rq+JU/2RfTNFxx07q1AjuDjCM8+MOOLCvVOV1z3qhl6nKAtjJT0pA0J8EbnK8Q==
+
 "@napi-rs/canvas-linux-riscv64-gnu@0.1.80":
   version "0.1.80"
   resolved "https://registry.yarnpkg.com/@napi-rs/canvas-linux-riscv64-gnu/-/canvas-linux-riscv64-gnu-0.1.80.tgz#927a3b859a0e3c691beaf52a19bc4736c4ffc9b8"
@@ -857,6 +887,11 @@
   resolved "https://registry.yarnpkg.com/@napi-rs/canvas-linux-riscv64-gnu/-/canvas-linux-riscv64-gnu-0.1.84.tgz#787c9c207f69aaa51b852c7063a6eed2985b7fca"
   integrity sha512-PSMTh8DiThvLRsbtc/a065I/ceZk17EXAATv9uNvHgkgo7wdEfTh2C3aveNkBMGByVO3tvnvD5v/YFtZL07cIg==
 
+"@napi-rs/canvas-linux-riscv64-gnu@0.1.87":
+  version "0.1.87"
+  resolved "https://registry.yarnpkg.com/@napi-rs/canvas-linux-riscv64-gnu/-/canvas-linux-riscv64-gnu-0.1.87.tgz#031d3d30cf667586ad742c5846a0e086c7ada950"
+  integrity sha512-5KM4dBFEzFMNkJV2rheIQWpd+mRZA7VNDnxTT7nsCEf6DUjUnf6Hssq9bAwjVYTe4jqraDHbWRbF4uXLBLRFJg==
+
 "@napi-rs/canvas-linux-x64-gnu@0.1.80":
   version "0.1.80"
   resolved "https://registry.yarnpkg.com/@napi-rs/canvas-linux-x64-gnu/-/canvas-linux-x64-gnu-0.1.80.tgz#25c0416bcedd6fadc15295e9afa8d9697232050c"
@@ -867,6 +902,11 @@
   resolved "https://registry.yarnpkg.com/@napi-rs/canvas-linux-x64-gnu/-/canvas-linux-x64-gnu-0.1.84.tgz#fb6eaea81ce679575b5004bc2177ce2d9222642b"
   integrity sha512-N1GY3noO1oqgEo3rYQIwY44kfM11vA0lDbN0orTOHfCSUZTUyiYCY0nZ197QMahZBm1aR/vYgsWpV74MMMDuNA==
 
+"@napi-rs/canvas-linux-x64-gnu@0.1.87":
+  version "0.1.87"
+  resolved "https://registry.yarnpkg.com/@napi-rs/canvas-linux-x64-gnu/-/canvas-linux-x64-gnu-0.1.87.tgz#9fcc1dd2574aebe7e57797196712106665948dd0"
+  integrity sha512-zSv+ozz9elT5YhocyogX5LwVYURChO4QGD6CQIW6OnuNA0UOMDD/b4wDzlJiMphISy3EVTntlKFhe4W3EuKcxw==
+
 "@napi-rs/canvas-linux-x64-musl@0.1.80":
   version "0.1.80"
   resolved "https://registry.yarnpkg.com/@napi-rs/canvas-linux-x64-musl/-/canvas-linux-x64-musl-0.1.80.tgz#de85d644e09120a60996bbe165cc2efee804551b"
@@ -877,6 +917,16 @@
   resolved "https://registry.yarnpkg.com/@napi-rs/canvas-linux-x64-musl/-/canvas-linux-x64-musl-0.1.84.tgz#54a37352a0be7f2a7218b6f11d8ae9b5cdbb3d6c"
   integrity sha512-vUZmua6ADqTWyHyei81aXIt9wp0yjeNwTH0KdhdeoBb6azHmFR8uKTukZMXfLCC3bnsW0t4lW7K78KNMknmtjg==
 
+"@napi-rs/canvas-linux-x64-musl@0.1.87":
+  version "0.1.87"
+  resolved "https://registry.yarnpkg.com/@napi-rs/canvas-linux-x64-musl/-/canvas-linux-x64-musl-0.1.87.tgz#170b7fe76f1b52a947bc287cd3c84f15d210b863"
+  integrity sha512-jTNmicAZQ70X+cbjZz6G6w8lmORwxRBmj/U20ECNYvcWVLshgyCKWPFL2I0Z6pkJve0vZWls6oZ15iccm1sv8w==
+
+"@napi-rs/canvas-win32-arm64-msvc@0.1.87":
+  version "0.1.87"
+  resolved "https://registry.yarnpkg.com/@napi-rs/canvas-win32-arm64-msvc/-/canvas-win32-arm64-msvc-0.1.87.tgz#6536946c5a1796b0781700ee6b39ac18fc0e96b2"
+  integrity sha512-p6J7UNAxKHYc7AL0glEtYuW/E0OLLUNnLti8dA2OT51p08Il4T7yZCl+iNo6f73HntFP+dgOHh2cTXUhmk8GuA==
+
 "@napi-rs/canvas-win32-x64-msvc@0.1.80":
   version "0.1.80"
   resolved "https://registry.yarnpkg.com/@napi-rs/canvas-win32-x64-msvc/-/canvas-win32-x64-msvc-0.1.80.tgz#6bb95885d9377912d71f1372fc1916fb54d6ef0a"
@@ -887,6 +937,11 @@
   resolved "https://registry.yarnpkg.com/@napi-rs/canvas-win32-x64-msvc/-/canvas-win32-x64-msvc-0.1.84.tgz#e0039b89f8e04287c77bd1fb5e6fa671d6c9d3c8"
   integrity sha512-YSs8ncurc1xzegUMNnQUTYrdrAuaXdPMOa+iYYyAxydOtg0ppV386hyYMsy00Yip1NlTgLCseRG4sHSnjQx6og==
 
+"@napi-rs/canvas-win32-x64-msvc@0.1.87":
+  version "0.1.87"
+  resolved "https://registry.yarnpkg.com/@napi-rs/canvas-win32-x64-msvc/-/canvas-win32-x64-msvc-0.1.87.tgz#058b13a9dd3a6180bdfa976b7c8814cfb9df9c8f"
+  integrity sha512-WrwfETMLBRFWkGU8fXU50gCpA2eIjR4NE9JyTKl86Kz5g6SDp0CcuqS2phYtB66TI2HDUhTPbNrk4V7Qf1FOLA==
+
 "@napi-rs/canvas@0.1.80":
   version "0.1.80"
   resolved "https://registry.yarnpkg.com/@napi-rs/canvas/-/canvas-0.1.80.tgz#53615bea56fd94e07331ab13caa7a39efc4914ab"
@@ -919,6 +974,23 @@
     "@napi-rs/canvas-linux-x64-musl" "0.1.84"
     "@napi-rs/canvas-win32-x64-msvc" "0.1.84"
 
+"@napi-rs/canvas@^0.1.87":
+  version "0.1.87"
+  resolved "https://registry.yarnpkg.com/@napi-rs/canvas/-/canvas-0.1.87.tgz#028e581af4499ee4ca569eb10cb5705526fee68d"
+  integrity sha512-Zb5tePmPMOYBcuNW3NQaVM1sIkvIfel39euiOab/XMjC5Oc/AnPJLa/BacJcToGyIvehecS6eqcsF7i0Wqe1Sw==
+  optionalDependencies:
+    "@napi-rs/canvas-android-arm64" "0.1.87"
+    "@napi-rs/canvas-darwin-arm64" "0.1.87"
+    "@napi-rs/canvas-darwin-x64" "0.1.87"
+    "@napi-rs/canvas-linux-arm-gnueabihf" "0.1.87"
+    "@napi-rs/canvas-linux-arm64-gnu" "0.1.87"
+    "@napi-rs/canvas-linux-arm64-musl" "0.1.87"
+    "@napi-rs/canvas-linux-riscv64-gnu" "0.1.87"
+    "@napi-rs/canvas-linux-x64-gnu" "0.1.87"
+    "@napi-rs/canvas-linux-x64-musl" "0.1.87"
+    "@napi-rs/canvas-win32-arm64-msvc" "0.1.87"
+    "@napi-rs/canvas-win32-x64-msvc" "0.1.87"
+
 "@next/env@16.0.7":
   version "16.0.7"
   resolved "https://registry.yarnpkg.com/@next/env/-/env-16.0.7.tgz#eda56377a865d890d25122257d2b8a85b81d6d3d"
@@ -1312,6 +1384,11 @@
   resolved "https://registry.yarnpkg.com/@tokenizer/token/-/token-0.3.0.tgz#fe98a93fe789247e998c75e74e9c7c63217aa276"
   integrity sha512-OvjF+z51L3ov0OyAU0duzsYuvO01PH7x4t6DJx+guahgTnBHkhJdG7soQeTSFLWN3efnHyibZ4Z8l2EuWwJN3A==
 
+"@toolsycc/json-repair@^0.1.22":
+  version "0.1.22"
+  resolved "https://registry.yarnpkg.com/@toolsycc/json-repair/-/json-repair-0.1.22.tgz#7ad0eb30c4ef1c4286ad3487dc1bbda562f09986"
+  integrity sha512-IMrsxovS9a5pWGRxMCDQDW8FKKEZI/yK/HMcyJlbnd/s+Mk0dRtGr1BFicL276gDsPvb/JfNHtHSi1oc0eY1jA==
+
 "@types/better-sqlite3@^7.6.12":
   version "7.6.12"
   resolved "https://registry.yarnpkg.com/@types/better-sqlite3/-/better-sqlite3-7.6.12.tgz#e5712d46d71097dcc2775c0b068072eadc15deb7"