* Fix typos

* language

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
omahs 2025-05-14 18:30:28 +02:00 committed by GitHub
parent adf7e8a9a7
commit 946be93f08
GPG Key ID: B5690EEEBB952194
7 changed files with 8 additions and 8 deletions


@@ -208,9 +208,9 @@ We will only track usage details that help us make product and roadmap decisions
 - When a document is added or removed. No information _about_ the document. Just that the event occurred. This gives us an idea of use.
-- Type of vector database in use. Let's us know which vector database provider is the most used to prioritize changes when updates arrive for that provider.
+- Type of vector database in use. This helps us prioritize changes when updates arrive for that provider.
-- Type of LLM provider & model tag in use. Let's us know the most popular choice and prioritize changes when updates arrive for that provider or model, or combination thereof. eg: reasoning vs regular, multi-modal models, etc.
+- Type of LLM provider & model tag in use. This helps us prioritize changes when updates arrive for that provider or model, or combination thereof. eg: reasoning vs regular, multi-modal models, etc.
 - When a chat is sent. This is the most regular "event" and gives us an idea of the daily-activity of this project across all installations. Again, only the **event** is sent - we have no information on the nature or content of the chat itself.


@@ -1,6 +1,6 @@
 # How to deploy a private AnythingLLM instance on AWS
-With an AWS account you can easily deploy a private AnythingLLM instance on AWS. This will create a url that you can access from any browser over HTTP (HTTPS not supported). This single instance will run on your own keys and they will not be exposed - however if you want your instance to be protected it is highly recommend that you set a password one setup is complete.
+With an AWS account you can easily deploy a private AnythingLLM instance on AWS. This will create a url that you can access from any browser over HTTP (HTTPS not supported). This single instance will run on your own keys and they will not be exposed - however if you want your instance to be protected it is highly recommend that you set a password once setup is complete.
 **Quick Launch (EASY)**
 1. Log in to your AWS account


@@ -1,6 +1,6 @@
 # How to deploy a private AnythingLLM instance on DigitalOcean using Terraform
-With a DigitalOcean account, you can easily deploy a private AnythingLLM instance using Terraform. This will create a URL that you can access from any browser over HTTP (HTTPS not supported). This single instance will run on your own keys, and they will not be exposed. However, if you want your instance to be protected, it is highly recommended that you set a password one setup is complete.
+With a DigitalOcean account, you can easily deploy a private AnythingLLM instance using Terraform. This will create a URL that you can access from any browser over HTTP (HTTPS not supported). This single instance will run on your own keys, and they will not be exposed. However, if you want your instance to be protected, it is highly recommended that you set a password once setup is complete.
 The output of this Terraform configuration will be:
 - 1 DigitalOcean Droplet


@@ -1,6 +1,6 @@
 # How to deploy a private AnythingLLM instance on GCP
-With a GCP account you can easily deploy a private AnythingLLM instance on GCP. This will create a url that you can access from any browser over HTTP (HTTPS not supported). This single instance will run on your own keys and they will not be exposed - however if you want your instance to be protected it is highly recommend that you set a password one setup is complete.
+With a GCP account you can easily deploy a private AnythingLLM instance on GCP. This will create a url that you can access from any browser over HTTP (HTTPS not supported). This single instance will run on your own keys and they will not be exposed - however if you want your instance to be protected it is highly recommend that you set a password once setup is complete.
 The output of this cloudformation stack will be:
 - 1 GCP VM


@@ -94,7 +94,7 @@ export default function DrupalWikiOptions() {
 Drupal Wiki Space IDs
 </label>
 <p className="text-xs font-normal text-theme-text-secondary">
-Comma seperated Space IDs you want to extract. See the&nbsp;
+Comma separated Space IDs you want to extract. See the&nbsp;
 <a
 href="https://help.drupal-wiki.com/node/606"
 target="_blank"


@@ -126,7 +126,7 @@ class EphemeralAgentHandler extends AgentHandler {
 * Attempts to find a fallback provider and model to use if the workspace
 * does not have an explicit `agentProvider` and `agentModel` set.
 * 1. Fallback to the workspace `chatProvider` and `chatModel` if they exist.
-* 2. Fallback to the system `LLM_PROVIDER` and try to load the the associated default model via ENV params or a base available model.
+* 2. Fallback to the system `LLM_PROVIDER` and try to load the associated default model via ENV params or a base available model.
 * 3. Otherwise, return null - will likely throw an error the user can act on.
 * @returns {object|null} - An object with provider and model keys.
 */


@@ -265,7 +265,7 @@ class AgentHandler {
 * Attempts to find a fallback provider and model to use if the workspace
 * does not have an explicit `agentProvider` and `agentModel` set.
 * 1. Fallback to the workspace `chatProvider` and `chatModel` if they exist.
-* 2. Fallback to the system `LLM_PROVIDER` and try to load the the associated default model via ENV params or a base available model.
+* 2. Fallback to the system `LLM_PROVIDER` and try to load the associated default model via ENV params or a base available model.
 * 3. Otherwise, return null - will likely throw an error the user can act on.
 * @returns {object|null} - An object with provider and model keys.
 */
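The doc comment above describes a three-step fallback order for resolving an agent's provider and model. A minimal sketch of that order follows; the function name `getFallbackProvider` and the bare `workspace` object are hypothetical, and the "load the associated default model via ENV params" step is elided (the sketch just returns `null` for the model in that branch), so this is an illustration of the ordering, not the project's actual implementation.

```javascript
// Hypothetical sketch of the fallback order from the doc comment above.
// `workspace` fields and the function name are assumptions for illustration.
function getFallbackProvider(workspace) {
  // 1. Prefer the workspace chat provider/model if both are set.
  if (workspace.chatProvider && workspace.chatModel) {
    return { provider: workspace.chatProvider, model: workspace.chatModel };
  }
  // 2. Fall back to the system-level provider from ENV, if configured.
  //    (Default-model resolution via ENV params is elided in this sketch.)
  if (process.env.LLM_PROVIDER) {
    return { provider: process.env.LLM_PROVIDER, model: null };
  }
  // 3. Nothing usable - the caller can surface an actionable error.
  return null;
}
```

The explicit `null` in step 3 matches the comment's contract: the caller, not this helper, decides how to report the misconfiguration to the user.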