openai_chat

The OpenAI Chat Integration plugin provides seamless integration between the OpenAI platform and the BIAMI automation framework, enabling businesses to leverage the power of conversational AI within their automation processes.

With this plugin, users can easily incorporate OpenAI's language capabilities into their BIAMI automation processes, allowing them to build intelligent chatbots, natural language interfaces and AI-driven platforms. The integration enables businesses to enhance customer support, automate repetitive tasks, and streamline interactions by leveraging OpenAI's advanced natural language processing.

Plugin Properties

  • api_key

    Your OpenAI API key, used for Chat Completions requests.

  • model

    ID of the model to use. See the model endpoint compatibility table for details on which models work with the Chat Completions API.

  • filename

    Name of the file the response is written to.

  • assistant_prompt

    Previous message history, supplied as the assistant's prior messages.

  • user_prompt

    The question or message sent as the user prompt.

  • system_prompt

    Additional instructions sent as the system prompt to steer the model's behavior.

  • capturedata

    Additional data about the response, saved to separate files.

    Check the temp directory for files containing _id_, _usage_ and _finish_reason_.

    For more information, see the OpenAI API reference.

    Possible values: none, id, usage, finish_reason. Multiple values can be combined in one string.

  • body

    An optional property for advanced users who want to send a custom request according to the OpenAI API reference; the request payload in the sketch after this list shows the general shape of such a request.

    Leave empty if unused.

  • temperature

    What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.

    Default value = 1.

  • top_p

    An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.

    We generally recommend altering this or temperature but not both.

  • max_tokens

    The maximum number of tokens that can be generated in the chat completion. Default=2,500.

    The total length of input tokens and generated tokens is limited by the model's context length.

  • image_urls

    Space-separated user image URLs.

    This property is optional and should be used with vision-capable models, for example gpt-4-vision-preview (see the second sketch after this list).
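
For orientation, the sketch below shows how these properties roughly map onto a plain OpenAI Chat Completions request. It is not the plugin's implementation: the Python requests call, the model name and the prompt texts are illustrative assumptions, and only the request and response fields come from the OpenAI API reference.

    import requests

    # Placeholder values standing in for the plugin properties above.
    api_key = "sk-..."                                     # api_key
    model = "gpt-4-turbo"                                  # model
    system_prompt = "You are a helpful assistant."         # system_prompt
    assistant_prompt = "Earlier assistant reply, if any."  # assistant_prompt
    user_prompt = "What is process automation?"            # user_prompt

    # This payload has the same general shape as a custom request passed via the body property.
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "assistant", "content": assistant_prompt},
            {"role": "user", "content": user_prompt},
        ],
        "temperature": 1,    # temperature (default 1)
        "top_p": 1,          # top_p (alter this or temperature, not both)
        "max_tokens": 2500,  # max_tokens (plugin default 2,500)
    }

    response = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {api_key}"},
        json=payload,
        timeout=60,
    )
    data = response.json()

    # The answer text is what would be written to the file named by filename.
    print(data["choices"][0]["message"]["content"])

    # Fields that the capturedata property can save to separate files.
    print(data["id"], data["usage"], data["choices"][0]["finish_reason"])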
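
When image_urls is set, the user message presumably takes the multi-part content form that vision-capable models accept. The sketch below, with made-up URLs, shows how a space-separated image_urls string can be expanded into that form; it is an illustration, not the plugin's code.

    # Hypothetical values; image_urls is a space-separated string, as described above.
    image_urls = "https://example.com/a.png https://example.com/b.png"
    user_prompt = "Describe these images."

    # Build a multi-part user message: one text part plus one image_url part per URL.
    content = [{"type": "text", "text": user_prompt}]
    for url in image_urls.split():
        content.append({"type": "image_url", "image_url": {"url": url}})

    vision_message = {"role": "user", "content": content}
    # This message replaces the plain-text user message in the payload above and is
    # sent with a vision-capable model such as gpt-4-vision-preview.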

Examples

  • openai-101

    An example of how to use the OpenAI platform as part of your automation process.