* add eslint config to server
* add break statements to switch case
* add support for browser globals and turn off empty catch blocks
* disable lines with useless try/catch wrappers
* format
* fix no-undef errors
* disable lines violating no-unsafe-finally
* ignore syncStaticLists.mjs
* use proper null check for creatorId instead of unreachable nullish coalescing
* remove unneeded typescript eslint comment
* make no-unused-private-class-members a warning
* disable line for no-empty-objects
* add new lint script
* fix no-unused-vars violations
* make no-unused-vars an error (a sketch of the resulting config follows this block)
---------
Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
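
A minimal sketch of the server ESLint flat config these commits describe; the file layout and ignore path are assumptions, while the rules and the syncStaticLists.mjs ignore come from the commits above:

```js
// eslint.config.js — minimal sketch of the rules described above.
// The flat-config layout and exact paths are assumptions, not the repo's file.
const globals = require("globals");

module.exports = [
  { ignores: ["utils/syncStaticLists.mjs"] }, // skip the one-off sync script (path assumed)
  {
    languageOptions: {
      globals: { ...globals.node, ...globals.browser }, // support for browser globals
    },
    rules: {
      "no-unused-vars": "error", // promoted from warning to error
      "no-unused-private-class-members": "warn", // kept as a warning
      "no-empty": ["error", { allowEmptyCatch: true }], // empty catch blocks allowed
    },
  },
];
```

The new lint script would then be a one-liner in package.json, e.g. `"lint": "eslint ."` (exact glob assumed).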
* checkpoint
* test MCP and flows
* add native tool call detection back to LMStudio
* add native tool call loops for Ollama (see the loop sketch after this block)
* Add ability detection to DMR (regex parse)
* Bedrock and generic OpenAI behind an ENV flag
* DeepSeek native tool calling
* LocalAI native function calling
* Groq support
* linting; add LiteLLM and OpenRouter native tool calling via flag
* auto model context limit detection for ollama llm provider
* auto model context limit detection for lmstudio llm provider
* Patch Ollama to function and sync context windows like Foundry
* normalize how model context windows are cached from the endpoint service (a caching sketch follows this block)
todo: move this into global utility class with MODEL_MAP
eager load models on boot to pre-cache them
add model performance improvements into the Ollama agent as well as apply n_ctx
* remove debug log
---------
Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
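
A hedged sketch of the native tool-call loop for Ollama referenced above, using Ollama's documented `/api/chat` tool-calling shape; the `runTool` callback and host URL are illustrative assumptions, not the project's actual code:

```js
// Native tool-call loop against Ollama's /api/chat (sketch, not repo code).
// `runTool(name, args)` is a hypothetical executor for the requested tool.
async function ollamaToolLoop(model, messages, tools, runTool) {
  while (true) {
    const res = await fetch("http://127.0.0.1:11434/api/chat", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ model, messages, tools, stream: false }),
    }).then((r) => r.json());

    const toolCalls = res.message?.tool_calls ?? [];
    if (!toolCalls.length) return res.message.content; // no more tools -> final answer

    messages.push(res.message); // keep the assistant turn that requested the tools
    for (const call of toolCalls) {
      const result = await runTool(call.function.name, call.function.arguments);
      messages.push({ role: "tool", content: JSON.stringify(result) });
    }
  }
}
```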
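
The normalized context-window caching described above might look like the sketch below, with the commit's TODO of folding the cache into a global utility next to MODEL_MAP; all names here are hypothetical:

```js
// Sketch of caching model context windows fetched from the endpoint service.
// `modelContextCache` and `fetchModelInfo` are hypothetical names.
const modelContextCache = new Map();

async function cachedContextWindow(provider, model, fetchModelInfo) {
  const key = `${provider}:${model}`;
  if (modelContextCache.has(key)) return modelContextCache.get(key);
  const info = await fetchModelInfo(model); // e.g. Ollama /api/show, LM Studio model listing
  const limit = info?.contextLength ?? 4096; // fall back to a safe default
  modelContextCache.set(key, limit);
  return limit;
}
```

Eager-loading models on boot (per the commit note) would just mean calling this for each known model at startup so chats never pay the lookup cost.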
* WIP agentic tool call streaming (delta-accumulation sketch after this block)
- OpenAI
- Anthropic
- Azure OpenAI
* WIP rest of providers, EXCLUDING Bedrock and GenericOpenAI
* patch untooled complete/streaming to use the provider class's chatCallback and not assume an OpenAI client struct
example: Ollama
* modify ollama to function with its own overrides
normalize completion/stream outputs across providers/untooled
* dev build
* fix message sanitization for anthropic agent streaming
* wip fix anthropic agentic streaming sanitization
* patch gemini, textgenwebui, generic aibitat providers + disable providers we are unable to test
* refactor anthropic aibitat provider for empty message and tool call formatting
* Add frontend missing prop check
update Azure for streaming support
update Gemini to streaming support on gemini-* models
disable streaming for generic OpenAI
verify localAI support
verify NVIDIA Nim support
* DPAIS: remove temp from call, support streaming
* remove temperature of 0 to eliminate the possibility of bad-temp 400/500 errors
* Patch condition where the model is non-streamable and no tools are present or called, resulting in the provider's `handleFunctionCallChat` being called, which returns a string.
This would then fail in Untooled.complete, since the response would be a string and not the expected `response.choices?.[0]?.message`.
Modified this line to handle both stream/non-stream conditions and tool presence or absence (a normalization sketch follows this block).
* Allow generic OpenAI to be streamable, since under Untooled it should work fine
honor disabled streaming for providers where that concern may apply to regular chats
* rename function and move gemini-specific function to the Gemini provider
* add comments for readability
.complete on Azure should be non-streaming, as this is the sync response
* migrate CometAPI, but disable as we cannot test
---------
Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
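
The core pattern behind the streaming work above is accumulating partial tool-call deltas into complete calls before executing them. A sketch under OpenAI-style chunk shapes (which most of the providers listed emulate); this is not the repo's exact implementation:

```js
// Accumulate OpenAI-style streamed tool-call deltas into complete calls.
// Argument strings arrive in fragments keyed by `index` and must be concatenated.
async function collectToolCalls(stream) {
  const calls = []; // index -> { id, name, arguments }
  let text = "";
  for await (const chunk of stream) {
    const delta = chunk.choices?.[0]?.delta ?? {};
    if (delta.content) text += delta.content; // plain assistant text, if any
    for (const tc of delta.tool_calls ?? []) {
      const slot = (calls[tc.index] ??= { id: "", name: "", arguments: "" });
      if (tc.id) slot.id = tc.id;
      if (tc.function?.name) slot.name += tc.function.name;
      if (tc.function?.arguments) slot.arguments += tc.function.arguments;
    }
  }
  return { text, toolCalls: calls.filter(Boolean) };
}
```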
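
The non-streamable/no-tools patch boils down to tolerating both return shapes. A sketch assuming only the names from the commit message:

```js
// Normalize the two shapes Untooled.complete can receive (sketch):
// `handleFunctionCallChat` returns a plain string, while tooled paths
// return an OpenAI-shaped completion object.
function normalizeCompletion(response) {
  if (typeof response === "string") {
    return { role: "assistant", content: response }; // non-streamable, no tools
  }
  return response.choices?.[0]?.message ?? { role: "assistant", content: "" };
}
```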
* Support SQL Agent skill
* add MSSQL agent connector
* Add frontend to agent skills
remove FAKE_DB mock
reset skills to pick up child-skills dynamically
* add prompt examples for tools on untooled
* add better logging on SQL agents
* Wipe toolruns on each chat relay so tools can be used within the same session
* update comments
* add LMStudio agent (generic) support
"works" with non-tool-callable LLMs, highly dependent on system specs
* add comments
* enable few-shot prompting per function for OSS models (sketched after this block)
* Add Agent support for Ollama models
* improve JSON parsing for Ollama text responses (parsing sketch after this block)
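
The per-function few-shot prompting might look like the sketch below: each tool carries example exchanges that are prepended to the prompt so non-tool-callable OSS models can imitate the call format. The `examples` shape and helper are assumptions, not the repo's schema:

```js
// Hypothetical per-tool few-shot examples for models without native tool calling.
const sqlQueryTool = {
  name: "sql-query",
  description: "Run a read-only SQL query against a connected database.",
  examples: [
    {
      prompt: "How many users signed up last week?",
      call: { name: "sql-query", arguments: { sql: "SELECT COUNT(*) FROM users" } },
    },
  ],
};

// Expand a tool's examples into user/assistant message pairs for the prompt.
function fewShotMessages(tool) {
  return tool.examples.flatMap(({ prompt, call }) => [
    { role: "user", content: prompt },
    { role: "assistant", content: JSON.stringify(call) },
  ]);
}
```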
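
For the improved JSON parsing of Ollama text responses, a tolerant-extraction sketch (helper name is ours): try a strict parse first, then fall back to the first object-looking span in the reply.

```js
// Tolerant JSON extraction from free-text model replies (sketch).
// Handles replies that wrap the JSON in prose or code fences.
function parseToolJSON(text) {
  try {
    return JSON.parse(text);
  } catch {
    const match = text.match(/\{[\s\S]*\}/); // widest brace-delimited span
    if (!match) return null;
    try {
      return JSON.parse(match[0]);
    } catch {
      return null; // still not valid JSON; caller decides how to recover
    }
  }
}
```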