Mirror of https://github.com/ItzCrazyKns/Perplexica.git, synced 2025-10-21 14:58:15 +00:00
feat(app): update documentation
83	README.md
@@ -76,6 +76,34 @@ There are mainly 2 ways of installing Perplexica - With Docker, Without Docker.
### Getting Started with Docker (Recommended)

Perplexica can be easily run using Docker. Simply run the following command:

```bash
docker run -p 3000:3000 -p 8080:8080 --name perplexica itzcrazykns1337/perplexica:latest
```

This will pull and start the Perplexica container with the bundled SearxNG search engine. Once running, open your browser and navigate to http://localhost:3000. You can then configure your settings (API keys, models, etc.) directly in the setup screen.

**Note**: The image includes both Perplexica and SearxNG, so no additional setup is required.
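To confirm the container is up before opening the UI, you can check from a terminal (standard Docker and curl commands; the port assumes the default mapping above):

```bash
# List the running container and probe the web UI.
docker ps --filter name=perplexica
curl -I http://localhost:3000
```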
#### Using Perplexica with Your Own SearxNG Instance

If you already have SearxNG running, you can use the standalone version of Perplexica:

```bash
docker run -p 3000:3000 -e SEARXNG_API_URL=http://your-searxng-url:8080 --name perplexica itzcrazykns1337/perplexica:standalone-latest
```

**Important**: Make sure your SearxNG instance has:

- JSON format enabled in the settings
- Wolfram Alpha search engine enabled

Replace `http://your-searxng-url:8080` with your actual SearxNG URL. Then configure your AI provider settings in the setup screen at http://localhost:3000.
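One quick way to verify the JSON requirement is to query your instance directly (a sketch; replace the URL with your own, and note that `format=json` only works once the JSON format is enabled in SearxNG's settings):

```bash
# Returns JSON search results if the instance accepts the json format;
# an error response here usually means JSON is not enabled in settings.
curl "http://your-searxng-url:8080/search?q=test&format=json"
```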
#### Advanced Setup (Building from Source)

If you prefer to build from source or need more control:

1. Ensure Docker is installed and running on your system.
2. Clone the Perplexica repository:
@@ -85,39 +113,46 @@ There are mainly 2 ways of installing Perplexica - With Docker, Without Docker.
3. After cloning, navigate to the directory containing the project files.

4. Rename the `sample.config.toml` file to `config.toml`. For Docker setups, you need only fill in the following fields:

   - `OPENAI`: Your OpenAI API key. **You only need to fill this if you wish to use OpenAI's models**.
   - `CUSTOM_OPENAI`: Your OpenAI-API-compliant local server URL, model name, and API key. You should run your local server with its host set to `0.0.0.0`, take note of which port number it is running on, and then use that port number to set `API_URL = http://host.docker.internal:PORT_NUMBER`. You must specify the model name, such as `MODEL_NAME = "unsloth/DeepSeek-R1-0528-Qwen3-8B-GGUF:Q4_K_XL"`. Finally, set `API_KEY` to the appropriate value. If you have not defined an API key, just put anything you want in between the quotation marks: `API_KEY = "whatever-you-want-but-not-blank"`. **You only need to configure these settings if you want to use a local OpenAI-compliant server, such as Llama.cpp's [`llama-server`](https://github.com/ggml-org/llama.cpp/blob/master/tools/server/README.md)** (see the sketch after this list).
   - `OLLAMA`: Your Ollama API URL. You should enter it as `http://host.docker.internal:PORT_NUMBER`. If you installed Ollama on port 11434, use `http://host.docker.internal:11434`. For other ports, adjust accordingly. **You need to fill this if you wish to use Ollama's models instead of OpenAI's**.
   - `LEMONADE`: Your Lemonade API URL. Since Lemonade runs directly on your local machine (not in Docker), enter it as `http://host.docker.internal:PORT_NUMBER`. If you installed Lemonade on port 8000, use `http://host.docker.internal:8000`. For other ports, adjust accordingly. **You need to fill this if you wish to use Lemonade's models**.
   - `GROQ`: Your Groq API key. **You only need to fill this if you wish to use Groq's hosted models**.
   - `ANTHROPIC`: Your Anthropic API key. **You only need to fill this if you wish to use Anthropic models**.
   - `GEMINI`: Your Gemini API key. **You only need to fill this if you wish to use Google's models**.
   - `DEEPSEEK`: Your DeepSeek API key. **Only needed if you want to use DeepSeek models.**
   - `AIMLAPI`: Your AI/ML API key. **Only needed if you want to use AI/ML API models and embeddings.**

   **Note**: You can change these after starting Perplexica from the settings dialog.

   - `SIMILARITY_MEASURE`: The similarity measure to use (this is filled by default; you can leave it as is if you are unsure about it).
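   If you plan to use `CUSTOM_OPENAI` with Llama.cpp, here is a minimal sketch of starting `llama-server` so the Docker container can reach it (the model path is a placeholder; `--host` and `--port` are standard `llama-server` flags, and port 9000 is an arbitrary choice that avoids the ports Perplexica publishes):

   ```bash
   # Serve a local GGUF model on all interfaces so the Perplexica container
   # can reach it via host.docker.internal (model path is hypothetical).
   llama-server \
     --host 0.0.0.0 \
     --port 9000 \
     -m ./models/DeepSeek-R1-0528-Qwen3-8B-Q4_K_XL.gguf
   ```

   With this running, set `API_URL = http://host.docker.internal:9000`. On Linux, `host.docker.internal` may require starting the container with `--add-host=host.docker.internal:host-gateway`.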
5. Build and run using Docker:

   ```bash
   docker build -t perplexica .
   docker run -p 3000:3000 -p 8080:8080 --name perplexica perplexica
   ```

6. Access Perplexica at http://localhost:3000 and configure your settings in the setup screen.

**Note**: After the container is built, you can start Perplexica directly from Docker without having to open a terminal.
### Non-Docker Installation
1. Install SearXNG and allow `JSON` format in the SearXNG settings. Make sure the Wolfram Alpha search engine is also enabled.

2. Clone the repository:

   ```bash
   git clone https://github.com/ItzCrazyKns/Perplexica.git
   cd Perplexica
   ```

3. Install dependencies:

   ```bash
   npm i
   ```

4. Build the application:

   ```bash
   npm run build
   ```

5. Start the application:

   ```bash
   npm run start
   ```

6. Open your browser and navigate to http://localhost:3000 to complete the setup and configure your settings (API keys, models, SearxNG URL, etc.) in the setup screen.

**Note**: Using Docker is recommended as it simplifies the setup process, especially for managing environment variables and dependencies.
@@ -4,11 +4,55 @@
Perplexica's Search API makes it easy to use our AI-powered search engine. You can run different types of searches, pick the models you want to use, and get the most recent info. The sections below describe Perplexica's search API in detail.

## Endpoints

### Get Available Providers and Models

Before making search requests, you'll need to get the available providers and their models.

#### **GET** `/api/providers`

**Full URL**: `http://localhost:3000/api/providers`

Returns a list of all active providers with their available chat and embedding models.
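For example (assuming the default port of 3000):

```bash
# Fetch the provider list as JSON.
curl http://localhost:3000/api/providers
```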
**Response Example:**

```json
{
  "providers": [
    {
      "id": "550e8400-e29b-41d4-a716-446655440000",
      "name": "OpenAI",
      "chatModels": [
        {
          "name": "GPT 4 Omni Mini",
          "key": "gpt-4o-mini"
        },
        {
          "name": "GPT 4 Omni",
          "key": "gpt-4o"
        }
      ],
      "embeddingModels": [
        {
          "name": "Text Embedding 3 Large",
          "key": "text-embedding-3-large"
        }
      ]
    }
  ]
}
```
Use the `id` field as the `providerId` and the `key` field from the models arrays when making search requests.
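If you have `jq` installed, one convenient way to pull out just the IDs and model keys (a convenience sketch, not part of the API itself):

```bash
# Reduce the providers response to provider IDs and model keys.
curl -s http://localhost:3000/api/providers \
  | jq '.providers[] | {id, chatModels: [.chatModels[].key], embeddingModels: [.embeddingModels[].key]}'
```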
### Search Query

#### **POST** `/api/search`

**Full URL**: `http://localhost:3000/api/search`

**Note**: Replace `localhost:3000` with your Perplexica instance URL if running on a different host or port

### Request
@@ -19,12 +63,12 @@ The API accepts a JSON object in the request body, where you define the focus mo
```json
{
  "chatModel": {
    "providerId": "550e8400-e29b-41d4-a716-446655440000",
    "key": "gpt-4o-mini"
  },
  "embeddingModel": {
    "providerId": "550e8400-e29b-41d4-a716-446655440000",
    "key": "text-embedding-3-large"
  },
  "optimizationMode": "speed",
  "focusMode": "webSearch",
@@ -38,20 +82,19 @@ The API accepts a JSON object in the request body, where you define the focus mo
}
```
**Note**: The `providerId` must be a valid UUID obtained from the `/api/providers` endpoint. The example above uses a sample UUID for demonstration.
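Putting it together, a minimal request might look like this (a sketch: the UUID is the sample one from above, optional fields are omitted, and the search-text field is assumed to be named `query` as in the full request body):

```bash
# A sketch of a non-streaming search request against a local instance.
curl -X POST http://localhost:3000/api/search \
  -H 'Content-Type: application/json' \
  -d '{
    "chatModel": {
      "providerId": "550e8400-e29b-41d4-a716-446655440000",
      "key": "gpt-4o-mini"
    },
    "focusMode": "webSearch",
    "optimizationMode": "speed",
    "query": "What is Perplexica?"
  }'
```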
### Request Parameters
- **`chatModel`** (object, optional): Defines the chat model to be used for the query. To get available providers and models, send a GET request to `http://localhost:3000/api/providers`.

  - `providerId` (string): The UUID of the provider. You can get this from the `/api/providers` endpoint response.
  - `key` (string): The model key/identifier (e.g., `gpt-4o-mini`, `llama3.1:latest`). Use the `key` value from the provider's `chatModels` array, not the display name.

- **`embeddingModel`** (object, optional): Defines the embedding model for similarity-based searching. To get available providers and models, send a GET request to `http://localhost:3000/api/providers`.

  - `providerId` (string): The UUID of the embedding provider. You can get this from the `/api/providers` endpoint response.
  - `key` (string): The embedding model key (e.g., `text-embedding-3-large`, `nomic-embed-text`). Use the `key` value from the provider's `embeddingModels` array, not the display name.
- **`focusMode`** (string, required): Specifies which focus mode to use. Available modes:
@@ -108,7 +151,7 @@ The response from the API includes both the final message and the sources used t
#### Streaming Response (stream: true)
When streaming is enabled, the API returns a stream of newline-delimited JSON objects using Server-Sent Events (SSE). Each line contains a complete, valid JSON object. The response has `Content-Type: text/event-stream`.
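You can watch the stream from the command line with `curl -N` (a sketch; the body reuses the sample request from above with `"stream": true` added, per the heading of this section):

```bash
# -N disables curl's output buffering so each JSON line prints as it arrives.
curl -N -X POST http://localhost:3000/api/search \
  -H 'Content-Type: application/json' \
  -d '{
    "chatModel": {
      "providerId": "550e8400-e29b-41d4-a716-446655440000",
      "key": "gpt-4o-mini"
    },
    "focusMode": "webSearch",
    "query": "What is Perplexica?",
    "stream": true
  }'
```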
Example of streamed response objects:
@@ -2,45 +2,80 @@
To update Perplexica to the latest version, follow these steps:

## For Docker users (Using pre-built images)

Simply pull the latest image and restart your container:
```bash
docker pull itzcrazykns1337/perplexica:latest
docker stop perplexica
docker rm perplexica
docker run -p 3000:3000 -p 8080:8080 --name perplexica itzcrazykns1337/perplexica:latest
```
For the standalone version:
```bash
docker pull itzcrazykns1337/perplexica:standalone-latest
docker stop perplexica
docker rm perplexica
docker run -p 3000:3000 -e SEARXNG_API_URL=http://your-searxng-url:8080 --name perplexica itzcrazykns1337/perplexica:standalone-latest
```
Once updated, go to http://localhost:3000 and verify the latest changes. Your settings are preserved automatically.

## For Docker users (Building from source)

1. Navigate to your Perplexica directory and pull the latest changes:
   ```bash
   cd Perplexica
   git pull origin master
   ```
2. Rebuild the Docker image:
   ```bash
   docker build -t perplexica .
   ```
3. Stop and remove the old container, then start the new one:
   ```bash
   docker stop perplexica
   docker rm perplexica
   docker run -p 3000:3000 -p 8080:8080 --name perplexica perplexica
   ```
4. Once the command completes, go to http://localhost:3000 and verify the latest changes.
## For non-Docker users
1. Navigate to your Perplexica directory and pull the latest changes:
   ```bash
   cd Perplexica
   git pull origin master
   ```
2. Install any new dependencies:

   ```bash
   npm i
   ```
3. Rebuild the application:

   ```bash
   npm run build
   ```

4. Restart the application:

   ```bash
   npm run start
   ```

5. Go to http://localhost:3000 and verify the latest changes. Your settings are preserved automatically.

---