Interface: GenerateOptions
Options for the generation process.
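For reference, the options correspond to an interface of roughly the following shape. This is a sketch assembled from the property list below; the property names, types, and defaults are as documented here, but the exact declaration in the source may differ.

```ts
interface GenerateOptions {
  /** Max "bad" examples to include in the prompt (what to avoid). */
  badExamplesLimit?: number;
  /** Max "good" examples to include in the prompt (what to aim for). */
  goodExamplesLimit?: number;
  /** Model override, e.g. an OpenAI model instance. */
  model?: any;
  /** Save the generated content automatically. Default: false. */
  save?: boolean;
  /** Stream partial results before completion. Default: false. */
  stream?: boolean;
  /** Randomness of generation, from 0 (deterministic) to 1 (random). Default: 0.9. */
  temperature?: number;
}
```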
Properties
badExamplesLimit?
optional
badExamplesLimit: number
The maximum number of "bad" examples to include in the prompt to guide the model on what to avoid. These examples help steer the model away from undesired outputs.
goodExamplesLimit?
optional
goodExamplesLimit: number
The maximum number of "good" examples to include in the prompt for model guidance. These examples help the model understand what the ideal output looks like.
model?
optional
model: any
The model to use for text generation, such as an OpenAI model instance. This can be used to override the default model.
save?
optional
save: boolean
Whether the generated content should be saved automatically after generation completes. Defaults to false.
stream?
optional
stream: boolean
Whether the response should be streamed incrementally, allowing partial results to be returned before generation completes. Defaults to false.
temperature?
optional
temperature: number
Controls the randomness of the generation process. A higher temperature (closer to 1) produces more varied outputs, while a lower temperature (closer to 0) produces more deterministic responses. Defaults to 0.9.
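A minimal usage sketch showing how these options fit together. The generate() call at the end is illustrative only; the actual function name and signature are not defined by this interface.

```ts
// Sketch only: the surrounding generate() API is an assumption,
// not taken from this documentation.
const options = {
  goodExamplesLimit: 3, // include up to 3 "good" examples in the prompt
  badExamplesLimit: 2,  // include up to 2 "bad" examples in the prompt
  temperature: 0.2,     // more deterministic than the 0.9 default
  stream: false,        // return the full result rather than streaming
  save: true,           // persist the generated content after completion
};

// const result = await generate("Write a product description", options);
```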