BanterBot Package
Classes
OpenAIService
- class banterbot.services.openai_service.OpenAIService(model: OpenAIModel)[source]
Bases:
object
A class that handles the interaction with the OpenAI ChatCompletion API. It provides functionality to generate responses from the API based on the input messages. It supports generating responses as a whole or as a stream of response blocks.
The main purpose of this class is to facilitate the communication with the OpenAI API and handle the responses generated by the API. It can be used to create chatbots or other applications that require natural language processing and generation.
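For orientation, a minimal usage sketch follows. The model-loading call and model name are hypothetical (modeled on AzureNeuralVoiceManager.load(), documented under Subpackages), and Message is assumed to accept role and content arguments, as its documented fields suggest:

    from banterbot.models.message import Message
    from banterbot.services.openai_service import OpenAIService
    from banterbot.managers.openai_model_manager import OpenAIModelManager

    # Hypothetical loader call and model name; see the note above.
    model = OpenAIModelManager.load("gpt-4")

    service = OpenAIService(model=model)
    messages = [Message(role="user", content="Introduce yourself in one sentence.")]
    for sentence in service.prompt(messages):
        print(sentence)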
- __init__(model: OpenAIModel) None [source]
Initializes an OpenAIService instance for a specific model.
- Parameters:
model (OpenAIModel) – The OpenAI model to be used. This should be an instance of the OpenAIModel class, which contains information about the model, such as its name and maximum token limit.
- api_key_set = False
- client = None
- count_tokens(string: str) int [source]
Counts the number of tokens in the provided string.
- Parameters:
string (str) – A string provided by the user in which the number of tokens is to be counted.
- Returns:
The number of tokens in the string.
- Return type:
int
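For example, count_tokens can be used to check that a prompt fits within the model's context window before sending it. This is a sketch reusing service from the example above; the max_tokens attribute name is an assumption, since OpenAIModel is described only as containing the model's name and maximum token limit:

    text = "Summarize the plot of Hamlet in two sentences."
    n_tokens = service.count_tokens(text)

    # `max_tokens` is an assumed attribute name for the model's token limit.
    if n_tokens > service.model.max_tokens:
        raise ValueError("Prompt exceeds the model's maximum token limit.")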
- interrupt(kill: bool = False) None [source]
Interrupts the current OpenAI ChatCompletion process.
- Parameters:
kill (bool) – Whether the interruption should kill the queues or not.
- property model: OpenAIModel
Return the OpenAIModel associated with the current instance.
- Returns:
OpenAIModel
- prompt(messages: list[Message], split: bool = True, **kwargs) tuple[str] | str [source]
Sends messages to the OpenAI ChatCompletion API and retrieves the response as a tuple of sentences.
- Parameters:
messages (list[Message]) – A list of messages. Each message should be an instance of the Message class, which contains the content and role.
split (bool) – Whether the response should be split into sentences.
**kwargs – Additional parameters for the API request. These can include settings such as temperature, top_p, and frequency_penalty.
- Returns:
A tuple of sentences forming the response from the OpenAI API. This can be used to display the generated response to the user or for further processing. If split is False, returns a single string instead.
- Return type:
Union[tuple[str], str]
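A short sketch of the two return modes, reusing service and messages from the earlier example:

    # split=True (the default) returns a tuple of sentences.
    sentences = service.prompt(messages, split=True, temperature=0.7)

    # split=False returns the raw response as a single string.
    text = service.prompt(messages, split=False)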
- prompt_stream(messages: list[Message], init_time: int | None = None, **kwargs) StreamHandler | tuple[()] [source]
Sends messages to the OpenAI API and retrieves the response as a stream of blocks of sentences.
- Parameters:
messages (list[Message]) – A list of messages. Each message should be an instance of the Message class, which contains the content and role.
init_time (Optional[int]) – The time at which the stream was initialized.
**kwargs – Additional parameters for the API request. These can include settings such as temperature, top_p, and frequency_penalty.
- Returns:
A handler for the stream of blocks of sentences forming the response from the OpenAI API, or an empty tuple if the stream was interrupted.
- Return type:
Union[StreamHandler, tuple[()]]
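A streaming sketch, reusing service and messages from above. It assumes the returned StreamHandler is iterable over blocks of sentences and is truthy when active, neither of which is spelled out in this section:

    handler = service.prompt_stream(messages)

    # An empty tuple signals that the stream was interrupted.
    if handler:
        for block in handler:          # each block is a group of sentences
            for sentence in block:
                print(sentence, end=" ", flush=True)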
SpeechSynthesisService
- class banterbot.services.speech_synthesis_service.SpeechSynthesisService(output_format: SpeechSynthesisOutputFormat = SpeechSynthesisOutputFormat.Audio16Khz32KBitRateMonoMp3)[source]
Bases:
object
The SpeechSynthesisService class provides an interface to convert text into speech using Azure’s Cognitive Services. It supports various output formats, voices, and speaking styles. The synthesized speech can be interrupted, and the progress can be monitored in real-time.
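A minimal synthesis sketch; Phrase's constructor is not documented in this section, so the text keyword below is an assumption (see synthesize() below for the signature):

    from banterbot.models.phrase import Phrase
    from banterbot.services.speech_synthesis_service import SpeechSynthesisService

    synthesizer = SpeechSynthesisService()

    # Hypothetical Phrase construction; only the Phrase type itself is documented.
    phrases = [Phrase(text="Hello, and welcome to BanterBot.")]

    # synthesize() yields Word objects in real time as the audio plays;
    # this assumes a Word renders as its text when printed.
    for word in synthesizer.synthesize(phrases):
        print(word, end="", flush=True)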
- __init__(output_format: SpeechSynthesisOutputFormat = SpeechSynthesisOutputFormat.Audio16Khz32KBitRateMonoMp3) None [source]
Initializes an instance of the SpeechSynthesisService class with a specified output format.
- Parameters:
output_format (SpeechSynthesisOutputFormat, optional) – The desired output format for the synthesized speech. Default is Audio16Khz32KBitRateMonoMp3.
- interrupt() None [source]
Interrupts the current speech synthesis process.
- synthesize(phrases: list[Phrase], init_time: int | None = None) Generator[Word, None, None] [source]
Synthesizes the given phrases into speech and returns a generator that yields the synthesized words.
- Parameters:
phrases (list[Phrase]) – The input phrases that are to be converted into speech.
init_time (Optional[int]) – The time at which the synthesis was initialized.
- Returns:
A generator yielding the synthesized words.
- Return type:
Generator[Word, None, None]
SpeechRecognitionService
- class banterbot.services.speech_recognition_service.SpeechRecognitionService(languages: str | list[str] | None = None, phrase_list: list[str] | None = None)[source]
Bases:
object
The SpeechRecognitionService class provides an interface to convert spoken language into written text using Azure Cognitive Speech Services. It allows continuous speech recognition and provides real-time results as sentences are recognized.
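A minimal recognition sketch, assuming the returned StreamHandler is iterable over recognized sentences and truthy when active:

    from banterbot.services.speech_recognition_service import SpeechRecognitionService

    recognizer = SpeechRecognitionService(languages=["en-US", "fr-FR"])
    recognizer.phrases_add(["BanterBot", "Azure"])  # bias recognition toward known terms

    handler = recognizer.recognize()
    if handler:  # an empty tuple signals that recognition was interrupted
        for sentence in handler:
            print(sentence)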
- __init__(languages: str | list[str] | None = None, phrase_list: list[str] | None = None) None [source]
Initializes the SpeechRecognitionService instance by setting up the Azure Cognitive Services speech configuration and recognizer. The languages argument can take one or more values, each representing a language the recognizer can expect to receive as input. The recognizer will attempt to auto-detect the language if multiple are provided.
- Parameters:
languages (Union[str, list[str]]) – The language(s) the speech-to-text recognizer expects to hear.
phrase_list (list[str], optional) – Optionally provide the recognizer with context to improve recognition.
- exception_handler(log: list[StreamLogEntry], index: int, shared_data: dict)[source]
Handles exceptions that occur during the processing of the stream log.
- interrupt(kill: bool = False) None [source]
Interrupts the current speech recognition process.
- Parameters:
kill (bool) – Whether the interruption should kill the queues or not.
- phrases_add(phrases: list[str]) None [source]
Adds new phrases to the PhraseListGrammar instance, which implements a bias towards the specified words/phrases and can help improve speech recognition in circumstances where there may be potential ambiguity.
- Parameters:
phrases (list[str]) – Provide the recognizer with additional text context to improve recognition.
- recognize(init_time: int | None = None) StreamHandler | tuple[()] [source]
Recognizes speech and returns a handler for the stream of recognized sentences as they are processed.
- Parameters:
init_time (Optional[int]) – The time at which the recognition was initialized.
- Returns:
A handler for the stream of recognized sentences, or an empty tuple if the recognition was interrupted.
- Return type:
Union[StreamHandler, tuple[()]]
Interface
- class banterbot.extensions.interface.Interface(model: OpenAIModel, voice: AzureNeuralVoiceProfile, languages: str | list[str] | None = None, system: str | None = None, tone_model: OpenAIModel | None = None, phrase_list: list[str] | None = None, assistant_name: str | None = None)[source]
Bases:
ABC
Interface is an abstract base class for creating frontends for the BanterBot application. It provides a high-level interface for managing conversation with the bot, including sending messages, receiving responses, and updating a conversation area. The interface supports both text and speech-to-text input for user messages.
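A sketch of a minimal concrete frontend. The members to implement are taken from __abstractmethods__ below; the _init_gui signature is an assumption, since that private method is not documented in this section:

    from banterbot.extensions.interface import Interface

    class ConsoleInterface(Interface):
        """A bare-bones console frontend sketch."""

        def _init_gui(self) -> None:
            # No GUI components to build for a console frontend.
            pass

        def update_conversation_area(self, word: str) -> None:
            # Print each word as it arrives from synthesis.
            print(word, end="", flush=True)

        def run(self) -> None:
            # A trivial read-prompt loop standing in for a GUI event loop.
            while True:
                self.prompt(input("\n> "))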
- __abstractmethods__ = frozenset({'_init_gui', 'run', 'update_conversation_area'})
- __init__(model: OpenAIModel, voice: AzureNeuralVoiceProfile, languages: str | list[str] | None = None, system: str | None = None, tone_model: OpenAIModel | None = None, phrase_list: list[str] | None = None, assistant_name: str | None = None) None [source]
Initialize the Interface with the specified model and voice.
- Parameters:
model (OpenAIModel) – The OpenAI model to use for generating responses.
voice (AzureNeuralVoiceProfile) – The voice to use for text-to-speech synthesis.
languages (Optional[Union[str, list[str]]]) – The languages supported by the speech-to-text recognizer.
system (Optional[str]) – An initialization prompt that can be used to set the scene.
tone_model (OpenAIModel, optional) – The OpenAI ChatCompletion model to use for tone evaluation.
phrase_list (list[str], optional) – Optionally provide the recognizer with context to improve recognition.
assistant_name (str, optional) – Optionally provide a name for the character.
- interrupt(shutdown_time: int | None = None) None [source]
Interrupts all speech-to-text recognition, text-to-speech synthesis, and OpenAI API streams.
- Parameters:
shutdown_time (Optional[int]) – The time at which the listener was deactivated.
- listener_activate(name: str | None = None) None [source]
Activate the speech-to-text listener.
- Parameters:
name (Optional[str]) – The name of the user sending the message. Defaults to None.
- prompt(message: str, name: str | None = None) None [source]
Prompt the bot with the specified user message.
- Parameters:
message (str) – The message content from the user.
name (Optional[str]) – The name of the user sending the message. Defaults to None.
- respond(init_time: int) None [source]
Get a response from the bot and update the conversation area with the response. This method handles generating the bot’s response using the OpenAIService and updating the conversation area with the response text using text-to-speech synthesis.
- abstract run() None [source]
Run the frontend application. This method should be implemented by subclasses to handle the main event loop of the specific GUI framework.
- send_message(content: str, role: ChatCompletionRoles = ChatCompletionRoles.USER, name: str | None = None, hidden: bool = False) None [source]
Send a message from the user to the conversation.
- Parameters:
content (str) – The message content from the user.
role (ChatCompletionRoles) – The role (USER, ASSISTANT, SYSTEM) associated with the content.
name (Optional[str]) – The name of the user sending the message. Defaults to None.
hidden (bool) – If True, does not display the message in the interface.
- system_prompt(message: str, name: str | None = None) None [source]
Prompt the bot with the specified message, issuing a command which is not displayed in the conversation area.
- Parameters:
message (str) – The message content from the user.
name (Optional[str]) – The name associated with the message. Defaults to None.
- abstract update_conversation_area(word: str) None [source]
Update the conversation area with the specified word, and add the word to the chat log. This method should be implemented by subclasses to handle updating the specific GUI components.
- Parameters:
word (str) – The word to add to the conversation area.
TKInterface
- class banterbot.gui.tk_interface.TKInterface(model: OpenAIModel | None = None, voice: AzureNeuralVoiceProfile | None = None, languages: str | list[str] | None = None, tone_model: OpenAIModel | None = None, system: str | None = None, phrase_list: list[str] | None = None, assistant_name: str | None = None)[source]
Bases:
Tk, Interface
A graphical user interface (GUI) class that enables interaction with the BanterBot chatbot in a multiplayer mode. It supports functionalities such as text input, text-to-speech and speech-to-text capabilities for up to 9 users simultaneously, based on OpenAI and Azure services.
This class inherits from tkinter’s Tk class and a custom Interface class, allowing it to be displayed as a standalone window and follow a specific chatbot interaction protocol respectively.
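A typical launch sketch; the OpenAIModelManager.load() call and both resource names are hypothetical (modeled on AzureNeuralVoiceManager.load(), documented under Subpackages):

    from banterbot.gui.tk_interface import TKInterface
    from banterbot.managers.openai_model_manager import OpenAIModelManager
    from banterbot.managers.azure_neural_voice_manager import AzureNeuralVoiceManager

    interface = TKInterface(
        model=OpenAIModelManager.load("gpt-4"),       # hypothetical loader/name
        voice=AzureNeuralVoiceManager.load("Aria"),   # hypothetical voice name
        languages="en-US",
        system="You are a witty assistant named BanterBot.",
        assistant_name="BanterBot",
    )
    interface.run(greet=True)  # greet=True makes the bot speak first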
- __abstractmethods__ = frozenset({})
- __init__(model: OpenAIModel | None = None, voice: AzureNeuralVoiceProfile | None = None, languages: str | list[str] | None = None, tone_model: OpenAIModel | None = None, system: str | None = None, phrase_list: list[str] | None = None, assistant_name: str | None = None) None [source]
Initialize the TKInterface class, which inherits from both tkinter.Tk and Interface.
- Parameters:
model (OpenAIModel, optional) – The OpenAI model to be used for generating responses.
voice (AzureNeuralVoiceProfile, optional) – The Azure Neural Voice profile to be used for text-to-speech.
languages (Optional[Union[str, list[str]]]) – The languages supported by the speech-to-text recognizer.
tone_model (OpenAIModel, optional) – The OpenAI ChatCompletion model to use for tone evaluation.
system (Optional[str]) – An initialization prompt that can be used to set the scene.
phrase_list (list[str], optional) – Optionally provide the recognizer with context to improve recognition.
assistant_name (str, optional) – Optionally provide a name for the character.
- listener_activate(idx: int) None [source]
Activate the speech-to-text listener.
- Parameters:
idx (int) – The index of the user activating the speech-to-text listener.
- run(greet: bool = False) None [source]
Run the BanterBot application. This method starts the main event loop of the tkinter application.
- Parameters:
greet (bool) – If True, greets the user unprompted on initialization.
- update_conversation_area(word: str) None [source]
Update the conversation area with the specified word, and add the word to the chat log. This implementation handles updating the tkinter conversation display.
- Parameters:
word (str) – The word to add to the conversation area.
Subpackages
- banterbot.data package
- banterbot.data.enums module
- banterbot.data.prompts module
Greetings
OptionPredictorPrompts
OptionSelectorPrompts
ProsodySelection
ProsodySelection.CHARACTER
ProsodySelection.CONTEXT
ProsodySelection.DUMMY
ProsodySelection.EMPHASIS_ASSISTANT
ProsodySelection.EMPHASIS_USER
ProsodySelection.EXAMPLE_ASSISTANT_1
ProsodySelection.EXAMPLE_ASSISTANT_2
ProsodySelection.EXAMPLE_USER
ProsodySelection.PITCH_ASSISTANT
ProsodySelection.PITCH_USER
ProsodySelection.PREFIX
ProsodySelection.PROMPT
ProsodySelection.RATE_ASSISTANT
ProsodySelection.RATE_USER
ProsodySelection.STYLEDEGREE_ASSISTANT
ProsodySelection.STYLEDEGREE_USER
ProsodySelection.STYLE_ASSISTANT
ProsodySelection.STYLE_USER
ProsodySelection.SUFFIX
SpeechSynthesisPreprocessing
ToneSelection
- banterbot.exceptions package
- banterbot.extensions package
- banterbot.gui package
- banterbot.handlers package
- banterbot.managers package
- banterbot.managers.azure_neural_voice_manager module
AzureNeuralVoiceManager
AzureNeuralVoiceManager.data()
AzureNeuralVoiceManager.list_countries()
AzureNeuralVoiceManager.list_genders()
AzureNeuralVoiceManager.list_languages()
AzureNeuralVoiceManager.list_locales()
AzureNeuralVoiceManager.list_regions()
AzureNeuralVoiceManager.list_styles()
AzureNeuralVoiceManager.load()
AzureNeuralVoiceManager.search()
- banterbot.managers.memory_chain module
- banterbot.managers.openai_model_manager module
- banterbot.managers.resource_manager module
- banterbot.managers.stream_manager module
- banterbot.models package
- banterbot.models.azure_neural_voice_profile module
AzureNeuralVoiceProfile
AzureNeuralVoiceProfile.country
AzureNeuralVoiceProfile.description
AzureNeuralVoiceProfile.gender
AzureNeuralVoiceProfile.language
AzureNeuralVoiceProfile.locale
AzureNeuralVoiceProfile.name
AzureNeuralVoiceProfile.region
AzureNeuralVoiceProfile.short_name
AzureNeuralVoiceProfile.style_list
- banterbot.models.memory module
- banterbot.models.message module
- banterbot.models.number module
- banterbot.models.openai_model module
- banterbot.models.phrase module
- banterbot.models.speech_recognition_input module
- banterbot.models.stream_log_entry module
- banterbot.models.word module
- banterbot.services package
- banterbot.types package
- banterbot.utils package