* Refactor `LLMPerformanceMonitor` to use an options object for `measureStream` parameters
* Refactor `measureStream` invocations to use the options argument
* Change the `measureStream` invocation in the Anthropic provider to use the options argument (sketched below)
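A minimal sketch of the new call shape, assuming hypothetical option names (`stream`, `messages`, `runPromptTokenCalculation`), since the real signature is not reproduced here:

```ts
// Assumed options shape; field names are illustrative, not the project's actual API.
interface MeasureStreamOptions {
  stream: AsyncIterable<unknown>; // raw token stream from the provider
  messages: Array<{ role: string; content: string }>; // chat history, for prompt token counting
  runPromptTokenCalculation?: boolean; // opt out of prompt token estimation
}

function measureStream({
  stream,
  messages,
  runPromptTokenCalculation = true, // default preserved when callers omit it
}: MeasureStreamOptions): AsyncIterable<unknown> {
  // ...timing and token counters would wrap the stream here,
  // using `messages` for the prompt-side estimate when enabled...
  return stream;
}

// Call sites (e.g. the Anthropic provider) now pass named fields
// instead of easy-to-misorder positional arguments:
// measureStream({ stream, messages, runPromptTokenCalculation: false });
```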
---------
Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
* Add `model` tag to `chatCompletion`
* Add the `model` tag to async streaming (sketch below);
keep default arguments for prompt token calculation where they are applied via an explicit arg
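A hedged sketch of attaching the `model` tag to completion metrics; every field name except `model` is an assumption for illustration:

```ts
// Assumed metrics payload returned with each (sync or streamed) completion.
type CompletionMetrics = {
  model: string;             // the tag added by this change
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
  outputTps: number;         // output tokens per second
  duration: number;          // seconds
};

function buildMetrics(
  model: string,
  promptTokens: number,
  completionTokens: number,
  durationSec: number
): CompletionMetrics {
  return {
    model,
    prompt_tokens: promptTokens,
    completion_tokens: completionTokens,
    total_tokens: promptTokens + completionTokens,
    outputTps: durationSec > 0 ? completionTokens / durationSec : 0,
    duration: durationSec,
  };
}
```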
* Fix the HuggingFace default arg
* Render all performance metrics when available, for backward compatibility;
add `timestamp` to both sync and async chat methods
* Extract the metrics string into a helper function (sketch below)
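A sketch of the extracted metrics-string helper, rendering each field only when present so older payloads missing newer fields (like `timestamp`) still format cleanly; names are assumptions:

```ts
type ChatMetrics = {
  model?: string;
  total_tokens?: number;
  outputTps?: number;  // output tokens per second
  duration?: number;   // seconds
  timestamp?: number;  // now set by both the sync and async chat methods
};

// Only render the metrics that are actually present, for backward compatibility.
function metricsToString(m: ChatMetrics): string {
  const parts: string[] = [];
  if (m.model) parts.push(`model: ${m.model}`);
  if (m.total_tokens !== undefined) parts.push(`${m.total_tokens} tokens`);
  if (m.outputTps !== undefined) parts.push(`${m.outputTps.toFixed(2)} tok/s`);
  if (m.duration !== undefined) parts.push(`${m.duration.toFixed(2)} s`);
  return parts.join(" | ");
}

console.log(
  metricsToString({ model: "example", total_tokens: 60, outputTps: 32.1, duration: 1.5, timestamp: Date.now() })
);
```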
* Add Microsoft Foundry Local LLM and agent providers (sketch below)
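Foundry Local exposes an OpenAI-compatible endpoint, so a provider can reuse an OpenAI client; the port and wiring below are placeholders, not values from this change:

```ts
import OpenAI from "openai";

// Foundry Local serves an OpenAI-compatible API on localhost.
// The port is dynamic in practice; this URL is a placeholder.
const client = new OpenAI({
  baseURL: "http://localhost:5273/v1",
  apiKey: "not-needed-locally", // the local server ignores the key
});

async function chat(model: string, prompt: string): Promise<string> {
  const res = await client.chat.completions.create({
    model, // e.g. a locally loaded Foundry model id
    messages: [{ role: "user", content: prompt }],
  });
  return res.choices[0]?.message?.content ?? "";
}
```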
* Minor change to fix the early stop token and overloading of the context window:
always use the user-defined window _unless_ it is larger than the model's real context window;
cache the context windows from the API when we can (0.7.* and later);
forcefully unload the model on model change to prevent resource hogging (sketch below)
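A sketch of the rule above under assumed names: prefer the user-defined window, clamp it to the model's real window when that is known (cached from the API on 0.7.* and later), and forcefully unload the previous model on switch:

```ts
const contextWindowCache = new Map<string, number>();
let loadedModel: string | null = null;

// Cache the real context window when the API (0.7.* and later) reports it.
function cacheContextWindow(model: string, reported: number): void {
  contextWindowCache.set(model, reported);
}

// Use the user-defined window unless it exceeds the model's real window.
function effectiveWindow(model: string, userDefined: number): number {
  const real = contextWindowCache.get(model);
  return real === undefined ? userDefined : Math.min(userDefined, real);
}

// Forcefully unload the previous model on change so it stops hogging resources.
async function switchModel(next: string, unload: (m: string) => Promise<void>): Promise<void> {
  if (loadedModel !== null && loadedModel !== next) await unload(loadedModel);
  loadedModel = next;
}
```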
* Add back the token preference, since some models have very large windows that can crash a machine (sketch below);
normalize cases
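A sketch of the restored preference cap, with assumed names: even a model advertising a huge window is capped at the user's token preference so a local machine is not overwhelmed:

```ts
// Some models advertise windows of a million tokens or more; filling one
// can exhaust memory on a local machine, so the user's preference wins.
function tokenLimit(userPreference: number | undefined, modelWindow: number): number {
  return userPreference === undefined ? modelWindow : Math.min(userPreference, modelWindow);
}

console.log(tokenLimit(8192, 1_000_000));   // -> 8192
console.log(tokenLimit(undefined, 32_768)); // -> 32768
```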
---------
Co-authored-by: Timothy Carambat <rambat1010@gmail.com>