* add check for timings field on final chunk to override usage data
* refactor: extract llama.cpp timings into reusable private method
Move timings extraction into #extractTimings so it can be shared
by both streaming (handleStream) and non-streaming (getChatCompletion)
code paths.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* lint and cleanup
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
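A minimal sketch of the shared timings extraction described in the commits above, assuming llama.cpp's `timings` block appears on the final streamed chunk; the class and field names are illustrative, not the shipped connector:

```js
class LlamaCppConnector {
  // Shared helper: normalize llama.cpp's `timings` block into usage-style metrics.
  #extractTimings(data) {
    if (!data?.timings) return null;
    const { prompt_n = 0, predicted_n = 0, predicted_per_second = 0 } = data.timings;
    return {
      prompt_tokens: prompt_n,
      completion_tokens: predicted_n,
      total_tokens: prompt_n + predicted_n,
      outputTps: predicted_per_second,
    };
  }

  // Non-streaming path (getChatCompletion): prefer timings-derived usage when present.
  usageFromCompletion(response) {
    return this.#extractTimings(response) ?? response?.usage ?? {};
  }

  // Streaming path (handleStream): the final chunk may carry a `timings` field;
  // when it does, it overrides usage data accumulated during the stream.
  async usageFromStream(stream, accumulatedUsage = {}) {
    let usage = accumulatedUsage;
    for await (const chunk of stream) {
      const timings = this.#extractTimings(chunk);
      if (timings) usage = timings;
    }
    return usage;
  }
}
```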
* Refactor LLMPerformanceMonitor to use options object for measureStream parameters
* Refactor invocations of `measureStream` to use options arguments
* Change invocation of `measureStream` in anthropic provider to use options argument
---------
Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
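A rough sketch of the options-object refactor; the option names below are assumptions rather than the exact `measureStream` API:

```js
class LLMPerformanceMonitor {
  // Old shape (roughly): measureStream(stream, messages, runPromptTokenCalculation)
  // New shape: a single options object so call sites pass named arguments and
  // defaults stay intact when an option is omitted.
  static measureStream({ stream, messages = [], runPromptTokenCalculation = true } = {}) {
    const startedAt = Date.now();
    return {
      stream,
      startedAt,
      promptTokens: runPromptTokenCalculation ? estimateTokens(messages) : undefined,
    };
  }
}

// Naive token estimate used only for this sketch.
function estimateTokens(messages) {
  return messages.reduce((sum, m) => sum + Math.ceil(String(m.content ?? "").length / 4), 0);
}

// Call-site refactor in a provider's streaming method:
// before: LLMPerformanceMonitor.measureStream(stream, messages, false)
// after:
// LLMPerformanceMonitor.measureStream({ stream, messages, runPromptTokenCalculation: false });
```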
* add model tag to chatCompletion
* add modelTag `model` to async streaming
keeps default arguments for prompt token calculation where applied via explicit arg
* fix HF default arg
* render all available performance metrics for backward compatibility
add `timestamp` to both sync/async chat methods
* extract metrics string to function
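A sketch of the metrics shape and the extracted string formatter described in the commits above; field and function names are illustrative, not the shipped implementation:

```js
function buildMetrics({ model, promptTokens, completionTokens, outputTps }) {
  return {
    model,                       // model tag attached to sync and streamed completions
    prompt_tokens: promptTokens,
    completion_tokens: completionTokens,
    total_tokens: (promptTokens ?? 0) + (completionTokens ?? 0),
    outputTps,
    timestamp: Date.now(),       // added to both sync and async chat methods
  };
}

// Extracted formatter so the UI renders whichever metrics are available
// (older chat records may be missing some fields).
function metricsString(metrics = {}) {
  const parts = [];
  if (metrics.total_tokens != null) parts.push(`${metrics.total_tokens} tokens`);
  if (metrics.outputTps != null) parts.push(`${Number(metrics.outputTps).toFixed(2)} tok/s`);
  if (metrics.model) parts.push(metrics.model);
  return parts.join(" · ");
}
```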
* Add className property to various LLM and embedder classes to fix logging bug after minification
* Fix bug with this.log method by applying the missing private field symbol
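A sketch of the minification-safe logging pattern these two fixes describe (an explicit `className` plus a `#`-prefixed private log method); the class shown is illustrative:

```js
class OpenAiLLM {
  // Explicit class name survives minification, unlike this.constructor.name,
  // which gets mangled and produces useless log prefixes.
  className = "OpenAiLLM";

  // Private method referenced with the `#` symbol so `this.#log` resolves
  // correctly; calling `this.log` when only a private member exists throws.
  #log(text, ...args) {
    console.log(`\x1b[36m[${this.className}]\x1b[0m ${text}`, ...args);
  }

  constructor() {
    this.#log("initialized");
  }
}
```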
* Add User-Agent header on the requests sent by Generic OpenAI providers.
* Moved getAnythingLLMUserAgent helper fn to server/endpoints/utils.js and changed fallback version string to "unknown"
---------
Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
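A sketch of the User-Agent wiring; aside from the "unknown" fallback noted above, the helper body and the version lookup are assumptions:

```js
// server/endpoints/utils.js
function getAnythingLLMUserAgent() {
  const version = process.env.ANYTHING_LLM_VERSION || "unknown"; // hypothetical version source
  return `AnythingLLM/${version}`;
}

// Generic OpenAI provider: attach the header to every outgoing request.
const { OpenAI } = require("openai");
function genericOpenAiClient({ basePath, apiKey }) {
  return new OpenAI({
    baseURL: basePath,
    apiKey,
    defaultHeaders: { "User-Agent": getAnythingLLMUserAgent() },
  });
}

module.exports = { getAnythingLLMUserAgent };
```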
* Enable agent context windows to be accurate per provider:model
* Refactor model mapping to external file
Use token count for document length instead of char-count
reference promptWindowLimit from AIProvider in a central location
* remove unused imports
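A sketch of the per provider:model context-window lookup described above; the map contents and helper names are illustrative:

```js
// Model mapping kept in its own file (e.g. modelMap.js).
const MODEL_MAP = {
  openai: { "gpt-4o": 128000, "gpt-4o-mini": 128000 },
  anthropic: { "claude-3-5-sonnet-20241022": 200000 },
};

class AIProvider {
  // Central place agents reference instead of hard-coded limits.
  static promptWindowLimit(provider, model, fallback = 8192) {
    return MODEL_MAP?.[provider]?.[model] ?? fallback;
  }
}

// Document length measured in tokens rather than characters before deciding
// whether it fits the agent's context window.
function fitsContextWindow(docText, provider, model, countTokens) {
  const tokens = countTokens(docText); // e.g. a tiktoken-based counter
  return tokens <= AIProvider.promptWindowLimit(provider, model);
}
```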