banterbot.services package

banterbot.services.openai_service module

class banterbot.services.openai_service.OpenAIService(model: OpenAIModel)[source]

Bases: object

A class that handles the interaction with the OpenAI ChatCompletion API. It provides functionality to generate responses from the API based on the input messages. It supports generating responses as a whole or as a stream of response blocks.

The main purpose of this class is to facilitate communication with the OpenAI API and handle the responses it generates. It can be used to create chatbots or other applications that require natural language processing and generation.

api_key_set = False
client = None
count_tokens(string: str) int[source]

Counts the number of tokens in the provided string.

Parameters:

string (str) – The string provided by the user in which the number of tokens is to be counted.

Returns:

The number of tokens in the string.

Return type:

int
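
A minimal usage sketch for count_tokens. The construction of an OpenAIModel is not documented in this section, so `gpt_model` below is a hypothetical placeholder:

    from banterbot.services.openai_service import OpenAIService

    # `gpt_model` stands in for an OpenAIModel instance; how it is constructed
    # is not covered in this section.
    service = OpenAIService(model=gpt_model)

    # Count the tokens in an arbitrary string before sending it to the API.
    n_tokens = service.count_tokens("How many tokens does this sentence use?")
    print(n_tokens)  # the exact count depends on the model's tokenizer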

interrupt(kill: bool = False) None[source]

Interrupts the current OpenAI ChatCompletion process.

Parameters:

kill (bool) – Whether the interruption should kill the queues or not.

property model: OpenAIModel

Return the OpenAIModel associated with the current instance.

Returns:

OpenAIModel

prompt(messages: list[Message], split: bool = True, **kwargs) tuple[str] | str[source]

Sends messages to the OpenAI ChatCompletion API and retrieves the response as a sequence of sentences.

Parameters:
  • messages (list[Message]) – A list of messages. Each message should be an instance of the Message class, which contains the content and role.

  • split (bool) – Whether the response should be split into sentences.

  • **kwargs – Additional parameters for the API request. These can include settings such as temperature, top_p, and frequency_penalty.

Returns:

A tuple of sentences forming the response from the OpenAI API. This can be used to display the generated response to the user or for further processing. If split is False, the full response is returned as a single string.

Return type:

Union[tuple[str], str]
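
A usage sketch for prompt. The keyword-based Message construction below is an assumption; this section only states that a Message contains a role and content:

    # Reusing the `service` instance from the count_tokens sketch above.
    # The constructor and import path for Message are assumptions.
    messages = [
        Message(role="system", content="You are a concise assistant."),
        Message(role="user", content="Summarize the plot of Hamlet in one line."),
    ]

    # With split=True (the default), the response is returned split into sentences.
    sentences = service.prompt(messages, temperature=0.7)
    for sentence in sentences:
        print(sentence)

    # With split=False, the full response is returned as a single string.
    full_text = service.prompt(messages, split=False)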

prompt_stream(messages: list[Message], init_time: int | None = None, **kwargs) StreamHandler | tuple[()][source]

Sends messages to the OpenAI API and retrieves the response as a stream of blocks of sentences.

Parameters:
  • messages (list[Message]) – A list of messages. Each message should be an instance of the Message class, which contains the content and role.

  • init_time (Optional[int]) – The time at which the stream was initialized.

  • **kwargs – Additional parameters for the API request. These can include settings such as temperature, top_p, and frequency_penalty.

Returns:

A handler for the stream of blocks of sentences forming the response from the OpenAI API, or an empty tuple if the stream was interrupted.

Return type:

Union[StreamHandler, tuple[()]]
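
A streaming sketch for prompt_stream. Iterating over the returned StreamHandler is an assumption about its interface, which is not documented in this section:

    # Stream the response as blocks of sentences.
    handler = service.prompt_stream(messages)

    # An empty tuple indicates the stream was interrupted.
    if handler:
        for block in handler:
            print(block)

    # From another thread, an in-progress generation can be stopped:
    # service.interrupt()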

banterbot.services.speech_synthesis_service module

class banterbot.services.speech_synthesis_service.SpeechSynthesisService(output_format: SpeechSynthesisOutputFormat = SpeechSynthesisOutputFormat.Audio16Khz32KBitRateMonoMp3)[source]

Bases: object

The SpeechSynthesisService class provides an interface to convert text into speech using Azure’s Cognitive Services. It supports various output formats, voices, and speaking styles. The synthesized speech can be interrupted, and the progress can be monitored in real-time.

interrupt() None[source]

Interrupts the current speech synthesis process.

synthesize(phrases: list[Phrase], init_time: int | None = None) Generator[Word, None, None][source]

Synthesizes the given phrases into speech and yields the synthesized words as they are produced.

Parameters:
  • phrases (list[Phrase]) – The input phrases that are to be converted into speech.

  • init_time (Optional[int]) – The time at which the synthesis was initialized.

Returns:

A generator that yields the synthesized words as they are produced.

Return type:

Generator[Word, None, None]
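
A synthesis sketch. The Phrase constructor shown below is an assumption; this section only documents that synthesize accepts a list of Phrase objects:

    from banterbot.services.speech_synthesis_service import SpeechSynthesisService

    synthesizer = SpeechSynthesisService()

    # Hypothetical Phrase construction; the Phrase class is not documented here.
    phrases = [Phrase(text="Hello there."), Phrase(text="How can I help you today?")]

    # synthesize() yields Word objects as the audio is produced.
    for word in synthesizer.synthesize(phrases):
        print(word)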

banterbot.services.speech_recognition_service module

class banterbot.services.speech_recognition_service.SpeechRecognitionService(languages: str | list[str] | None = None, phrase_list: list[str] | None = None)[source]

Bases: object

The SpeechRecognitionService class provides an interface to convert spoken language into written text using Azure Cognitive Speech Services. It allows continuous speech recognition and provides real-time results as sentences are recognized.

exception_handler(log: list[StreamLogEntry], index: int, shared_data: dict)[source]

Handles exceptions that occur during the processing of the stream log.

interrupt(kill: bool = False) None[source]

Interrupts the current speech recognition process.

Parameters:

kill (bool) – Whether the interruption should kill the queues or not.

phrases_add(phrases: list[str]) None[source]

Adds new phrases to the PhraseListGrammar instance, which applies a bias towards the specified words/phrases and can help improve speech recognition in circumstances where there may be potential ambiguity.

Parameters:

phrases (list[str]) – Provide the recognizer with additional text context to improve recognition.

phrases_clear() None[source]

Clear all phrases from the PhraseListGrammar instance.
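
A short sketch of biasing recognition toward domain-specific terms. The language code below is only an example value for the documented languages parameter:

    from banterbot.services.speech_recognition_service import SpeechRecognitionService

    recognizer = SpeechRecognitionService(languages="en-US")

    # Bias recognition toward terms that are otherwise easy to mis-hear.
    recognizer.phrases_add(["BanterBot", "OpenAIService", "StreamHandler"])

    # Remove all previously added phrases when the context changes.
    recognizer.phrases_clear()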

recognize(init_time: int | None = None) StreamHandler | tuple[()][source]

Recognizes speech and returns a handler for the stream of recognized sentences as they are processed.

Parameters:

init_time (Optional[int]) – The time at which the recognition was initialized.

Returns:

A handler for the stream of recognized sentences, or an empty tuple if the recognition was interrupted.

Return type:

Union[StreamHandler, tuple[()]]
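
A recognition sketch. As with prompt_stream, iterating over the returned StreamHandler is an assumption about its interface:

    # Start continuous recognition, reusing the `recognizer` from the sketch above.
    handler = recognizer.recognize()

    # An empty tuple indicates the recognition was interrupted.
    if handler:
        for sentence in handler:
            print(sentence)

    # Recognition can be stopped from another thread:
    # recognizer.interrupt()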