banterbot.models package
banterbot.models.azure_neural_voice_profile module
- class banterbot.models.azure_neural_voice_profile.AzureNeuralVoiceProfile(country: str, description: str, gender: SynthesisVoiceGender, language: str, locale: str, name: str, short_name: str, style_list: list[str], region: str | None = None)[source]
Bases:
object
A dataclass representing an Azure Neural Voice profile for speech synthesis.
- country
The country where the voice is commonly used.
- Type:
str
- description
A brief description of the voice profile.
- Type:
str
- gender
The gender of the voice.
- Type:
SynthesisVoiceGender
- language
The language of the voice.
- Type:
str
- locale
The name of the language’s locale (i.e., language-country[-region]).
- Type:
str
- name
The name of the voice profile.
- Type:
str
- region
The region where the voice is available or commonly used.
- Type:
Optional[str]
- short_name
The voice identifier used by the Azure Text-to-Speech API.
- Type:
str
- style_list
The available styles (i.e., tones/emotions) for the voice.
- Type:
list[str]
- country: str
- description: str
- gender: SynthesisVoiceGender
- language: str
- locale: str
- name: str
- region: str | None = None
- short_name: str
- style_list: list[str]
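A minimal sketch of constructing a voice profile by hand. The field values (and the use of the Azure Speech SDK's SynthesisVoiceGender enum) are illustrative assumptions; the real voice catalogue and supported styles come from Azure's Text-to-Speech service.

```python
# Illustrative sketch; field values are assumptions, not an official Azure listing.
from azure.cognitiveservices.speech import SynthesisVoiceGender

from banterbot.models.azure_neural_voice_profile import AzureNeuralVoiceProfile

voice = AzureNeuralVoiceProfile(
    country="United States",
    description="A warm, conversational female voice.",
    gender=SynthesisVoiceGender.Female,
    language="English",
    locale="en-US",
    name="Aria",
    short_name="en-US-AriaNeural",          # identifier passed to the Azure TTS API
    style_list=["cheerful", "sad", "whispering"],  # styles vary per voice
)
```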
banterbot.models.memory module
- class banterbot.models.memory.Memory(keywords: list[str], summary: str, impact: int, timestamp: datetime, messages: list[Message], uuid: str | None = None)[source]
Bases:
object
This class represents a single memory of a persona in the form of a dataclass. A memory is defined by keywords, a summary, an impact score, a timestamp, and associated messages.
- Parameters:
keywords (list[str]) – The list of keywords that summarize the memory.
summary (str) – A brief summary of the memory.
impact (int) – A score indicating the impact of the memory on the persona (accepts values 1 to 100).
timestamp (datetime.datetime) – The time when the memory occurred.
messages (list[Message]) – The list of messages associated with the memory.
- classmethod deserialize(data: str) Self [source]
Constructs a Memory instance from a serialized string of binary bytes.
- Returns:
The constructed Memory instance.
- Return type:
Memory
- classmethod from_protobuf(memory: Memory) Memory [source]
Constructs a Memory instance from a protobuf object.
- Parameters:
memory (memory_pb2.Memory) – The protobuf object to convert.
- Returns:
The constructed Memory instance.
- Return type:
Memory
- impact: int
- keywords: list[str]
- serialize() str [source]
Returns a serialized string of binary bytes representing the current Memory instance.
- Returns:
A string containing binary bytes.
- Return type:
str
- summary: str
- timestamp: datetime
- to_protobuf() Memory [source]
Converts this Memory instance into a protobuf object.
- Returns:
The protobuf object equivalent of the Memory instance.
- Return type:
memory_pb2.Memory
- uuid: str | None = None
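A minimal sketch of a serialize/deserialize round trip. The keyword, summary, and impact values are illustrative, and the message list is left empty; in normal use it holds the Message objects the memory was distilled from.

```python
from datetime import datetime

from banterbot.models.memory import Memory

memory = Memory(
    keywords=["birthday", "cake"],
    summary="The user mentioned their birthday is next week.",
    impact=40,                     # score from 1 to 100
    timestamp=datetime.now(),
    messages=[],                   # normally the Message objects behind the memory
)

blob = memory.serialize()          # string of binary bytes backed by the protobuf schema
restored = Memory.deserialize(blob)
assert restored.summary == memory.summary
```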
banterbot.models.message module
- class banterbot.models.message.Message(role: ChatCompletionRoles, content: str, name: str | None = None)[source]
Bases:
object
Represents a message that can be sent to the OpenAI ChatCompletion API.
The purpose of this class is to create a structured representation of a message that can be easily converted into a format compatible with the OpenAI API. This class is designed to be used in conjunction with the OpenAI ChatCompletion API to generate context-aware responses from an AI model.
- role
The role of the message sender. - ASSISTANT: Represents a message sent by the AI assistant. - SYSTEM: Represents a message sent by the system, usually containing instructions or context. - USER: Represents a message sent by the user interacting with the AI assistant.
- Type:
ChatCompletionRoles
- content
The content of the message.
- Type:
str
- name
The name of the message sender. This is an optional field and can be used to provide a more personalized experience by addressing the sender by their name.
- Type:
Optional[str]
- content: str
- count_tokens(model: OpenAIModel) int [source]
Counts the number of tokens in the current message.
This method is useful for keeping track of the total number of tokens used in a conversation, as the OpenAI API has a maximum token limit per request. By counting tokens, you can ensure that your conversation stays within the API’s token limit.
- Parameters:
model (OpenAIModel) – The model whose tokenizer should count the tokens. This is an instance of the OpenAIModel class, which contains the tokenizer and other model-specific information.
- Returns:
The number of tokens in the specified messages. Please note that this count includes tokens for message metadata and may vary based on the specific tokenizer used by the model.
- Return type:
int
- classmethod from_protobuf(message_proto: Message) Self [source]
Constructs a Message instance from a protobuf object.
- Parameters:
message_proto (memory_pb2.Message) – The protobuf object to convert.
- Returns:
The constructed Message instance.
- Return type:
Message
- name: str | None = None
- role: ChatCompletionRoles
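A minimal sketch of building a message and counting its tokens. The import path of ChatCompletionRoles and the OpenAIModel field values are assumptions made for illustration; see the OpenAIModel entry below.

```python
from banterbot.data.enums import ChatCompletionRoles  # assumed import path
from banterbot.models.message import Message
from banterbot.models.openai_model import OpenAIModel

message = Message(role=ChatCompletionRoles.USER, content="Hello there!", name="Alice")

# Illustrative model definition; max_tokens, generation, and rank are assumptions.
model = OpenAIModel(model="gpt-3.5-turbo", max_tokens=4096, generation=3.5, rank=2)
print(message.count_tokens(model))  # count includes tokens for message metadata
```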
banterbot.models.number module
banterbot.models.openai_model module
- class banterbot.models.openai_model.OpenAIModel(model: str, max_tokens: int, generation: float, rank: int)[source]
Bases:
object
A class representing an OpenAI ChatCompletion model.
- model
The name of the model.
- Type:
str
- max_tokens
The maximum number of tokens supported by the model.
- Type:
int
- generation
The generation number of the model (e.g., 3.5 for GPT-3.5 and 4 for GPT-4).
- Type:
float
- rank
The model quality rank; lower values indicate higher quality responses.
- Type:
int
- tokenizer
An instance of the tiktoken package’s Encoding object (i.e., a tokenizer).
- Type:
Encoding
- generation: float
- max_tokens: int
- model: str
- rank: int
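A minimal sketch of defining a model entry and using its tokenizer. The numeric values are illustrative, and the tokenizer attribute is assumed to be populated automatically from the model name (e.g., via tiktoken) rather than passed to the constructor.

```python
from banterbot.models.openai_model import OpenAIModel

model = OpenAIModel(model="gpt-4", max_tokens=8192, generation=4.0, rank=1)
n_tokens = len(model.tokenizer.encode("A short sentence to tokenize."))
print(model.model, n_tokens)
```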
banterbot.models.phrase module
- class banterbot.models.phrase.Phrase(text: str, voice: AzureNeuralVoiceProfile, style: str = '', styledegree: str = '', pitch: str = '', rate: str = '', emphasis: str = '')[source]
Bases:
object
Contains processed data for a sub-sentence returned from a ChatCompletion ProsodySelection prompt, ready for SSML interpretation.
- emphasis: str = ''
- pitch: str = ''
- rate: str = ''
- style: str = ''
- styledegree: str = ''
- text: str
- voice: AzureNeuralVoiceProfile
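A minimal sketch of a Phrase ready for SSML rendering. The prosody values are illustrative and must correspond to styles the chosen voice actually supports; voice here is an AzureNeuralVoiceProfile instance such as the one built in the earlier sketch.

```python
from banterbot.models.phrase import Phrase

phrase = Phrase(
    text="I can't believe it worked!",
    voice=voice,              # an AzureNeuralVoiceProfile instance
    style="cheerful",
    styledegree="1.5",
    pitch="+5%",
    rate="1.1",
    emphasis="moderate",
)
```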
banterbot.models.speech_recognition_input module
- class banterbot.models.speech_recognition_input.SpeechRecognitionInput(data: dict, language: str, offset: timedelta | None = None, duration: timedelta | None = None, offset_end: timedelta | None = None, sents: tuple[str, ...] | None = None, words: list[Word] | None = None, display: str | None = None)[source]
Bases:
object
A class that encapsulates the speech-to-text output data.
- property display: str
A getter property that returns the display form of the recognized speech. The display form is fully processed with Inverse Text Normalization (ITN), Capitalization, Disfluency Removal, and Punctuation.
- Returns:
The display form of the speech.
- Return type:
str
- property duration: timedelta
A getter property that returns the duration of the recognized speech in the audio stream.
- Returns:
The duration in the form of a datetime.timedelta object.
- Return type:
datetime.timedelta
- from_cutoff(cutoff: timedelta) Self [source]
Creates a new SpeechRecognitionInput instance that contains only the text spoken within the cutoff interval.
- Parameters:
cutoff (datetime.timedelta) – The upper cutoff time (or duration) of the new instance.
- Returns:
The new instance of SpeechRecognitionInput.
- Return type:
SpeechRecognitionInput
- classmethod from_recognition_result(recognition_result: SpeechRecognitionResult, language: str | None = None) Self [source]
Constructor for the SpeechRecognitionInput class. Designed to create lightweight instances with most attributes initially set to None. Computation-intensive operations are performed on-demand when respective properties are accessed, instead of during initialization.
- Parameters:
recognition_result (speechsdk.SpeechRecognitionResult) – The result from a speech recognition event.
language (str, optional) – The language used during the speech-to-text recognition, if not auto-detected.
- property offset: timedelta
A getter property that returns the offset of the recognized speech in the audio stream.
- Returns:
The offset in the form of a datetime.timedelta object.
- Return type:
datetime.timedelta
- property offset_end: timedelta
A getter property that returns the offset + duration of the recognized speech in the audio stream.
- Returns:
The duration + offset in the form of a datetime.timedelta object.
- Return type:
datetime.timedelta
- property sents: tuple[str, ...]
A getter property that returns a tuple of sentences. If the tuple is not already computed, accessing the property triggers the computation.
- Returns:
A tuple of sentences.
- Return type:
tuple[str, ...]
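A minimal sketch of wrapping an Azure recognition result inside a recognizer callback. The recognizer setup is omitted and assumes a configured Azure Speech subscription; the from_cutoff call assumes the cutoff is measured from the start of the recognized audio.

```python
from datetime import timedelta

import azure.cognitiveservices.speech as speechsdk

from banterbot.models.speech_recognition_input import SpeechRecognitionInput

def on_recognized(event: speechsdk.SpeechRecognitionEventArgs) -> None:
    result = SpeechRecognitionInput.from_recognition_result(event.result, language="en-US")
    print(result.display)                                   # punctuated, capitalized transcript
    first_two_seconds = result.from_cutoff(timedelta(seconds=2))
    print(first_two_seconds.sents)
```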
banterbot.models.stream_log_entry module
banterbot.models.word module
- class banterbot.models.word.Word(text: str, offset: timedelta, duration: timedelta)[source]
Bases:
object
This class encapsulates a word in the output of a text-to-speech synthesis or the input from a speech-to-text recognition. It includes the word itself and the timestamp at which the word was spoken. Optionally, its category (e.g., word or punctuation, using Azure’s Speech Synthesis Boundary Type) and its confidence score can also be included.
- text
The word that has been synthesized/recognized.
- Type:
str
- offset
Time elapsed between initialization and synthesis/recognition.
- Type:
datetime.timedelta
- duration
Amount of time required for the word to be fully spoken.
- Type:
datetime.timedelta
- duration: timedelta
- offset: timedelta
- text: str
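A minimal sketch with illustrative timing values.

```python
from datetime import timedelta

from banterbot.models.word import Word

word = Word(text="hello", offset=timedelta(seconds=1.2), duration=timedelta(milliseconds=350))
print(word.text, word.offset, word.duration)
```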