banterbot.managers package

banterbot.managers.azure_neural_voice_manager module

class banterbot.managers.azure_neural_voice_manager.AzureNeuralVoiceManager[source]

Bases: object

Management utility for loading Microsoft Azure Cognitive Services Neural Voice models from the Speech SDK. Only one instance per name is permitted to exist at a time, and loading occurs lazily: when the voices are downloaded, they are stored in the cache as instances of the AzureNeuralVoiceProfile class, and all future calls return these same cached instances.

classmethod data() dict[str, AzureNeuralVoiceProfile][source]

Access the data dictionary, downloading it first using the _download classmethod if necessary.

Returns:

A dict containing the downloaded AzureNeuralVoiceProfile instances.

Return type:

dict[str, AzureNeuralVoiceProfile]

classmethod list_countries() list[str][source]

Returns a list of two-character country codes (e.g., us, fr, etc.)

Returns:

A list of country codes.

Return type:

list[str]

classmethod list_genders() list[str][source]

Returns a list of available voice genders.

Returns:

A list of genders.

Return type:

list[str]

classmethod list_languages() list[str][source]

Returns a list of two-character language codes (e.g., en, fr, etc.)

Returns:

A list of language codes.

Return type:

list[str]

classmethod list_locales() list[str][source]

Returns a list of locales, which are language codes followed by country codes, in some cases followed by a region (e.g., en-US, fr-FR, etc.)

Returns:

A list of locales.

Return type:

list[str]

classmethod list_regions() list[str][source]

Returns a list of regions (e.g., sichuan, shandong, etc.)

Returns:

A list of regions.

Return type:

list[str]

classmethod list_styles() list[str][source]

Returns a list of available speaking styles (e.g., cheerful, whispering, etc.)

Returns:

A list of styles.

Return type:

list[str]
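
Example (a minimal sketch of the list_* helpers; the commented values are illustrative and depend on the downloaded voice data):

    from banterbot.managers.azure_neural_voice_manager import AzureNeuralVoiceManager

    # Enumerate the searchable values exposed by the downloaded voice data.
    print(AzureNeuralVoiceManager.list_languages())  # e.g., ["en", "fr", ...]
    print(AzureNeuralVoiceManager.list_countries())  # e.g., ["US", "FR", ...]
    print(AzureNeuralVoiceManager.list_styles())     # e.g., ["cheerful", "whispering", ...]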

classmethod load(name: str) AzureNeuralVoiceProfile[source]

Retrieve or initialize an AzureNeuralVoiceProfile instance by a name in the Neural Voices resource JSON.

Parameters:

name (str) – The name of the voice profile.

Returns:

An AzureNeuralVoiceProfile instance loaded with data from the specified name.

Return type:

AzureNeuralVoiceProfile

Raises:

KeyError – If the specified name is not found in the resource file defined by config.azure_neural_voices.
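Example (a minimal sketch; the voice name "Aria" is illustrative and must exist in the Neural Voices resource JSON):

    from banterbot.managers.azure_neural_voice_manager import AzureNeuralVoiceManager

    # Load (and cache) a voice profile by name; raises KeyError if the name is unknown.
    voice = AzureNeuralVoiceManager.load("Aria")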

classmethod search(gender: str | list[str] | None = None, language: str | list[str] | None = None, country: str | list[str] | None = None, region: str | list[str] | None = None, style: str | list[str] | None = None) list[AzureNeuralVoiceProfile][source]

Search through all the available Microsoft Azure Cognitive Services Neural Voice models using any combination of the provided arguments to get a list of relevant AzureNeuralVoiceProfile instances. For information on searchable languages, countries, and regions, visit:

https://learn.microsoft.com/en-us/azure/ai-services/speech-service/language-support?tabs=tts#supported-languages

Parameters:
  • gender (Optional[Union[list[str], str]]) – Can take the values MALE, FEMALE, and/or UNKNOWN.

  • language (Optional[Union[list[str], str]]) – Can take any language abbreviations (e.g., en, fr, etc.)

  • country (Optional[Union[list[str], str]]) – Can take any country abbreviations (e.g., US, FR, etc.)

  • region (Optional[Union[list[str], str]]) – Can take any region names (e.g., shaanxi, sichuan, etc.)

  • style (Optional[Union[list[str], str]]) – Can take any style names (e.g., cheerful, whispering, etc.)

Returns:

A list of AzureNeuralVoiceProfile instances.

Return type:

list[AzureNeuralVoiceProfile]
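
Example (a minimal sketch; the filter values follow the documented formats above, and the attribute accessed in the loop is an assumption about AzureNeuralVoiceProfile):

    from banterbot.managers.azure_neural_voice_manager import AzureNeuralVoiceManager

    # Combine any subset of the filters; omitted filters match everything.
    matches = AzureNeuralVoiceManager.search(language="en", country="US")
    for profile in matches:
        print(profile.name)  # assumes AzureNeuralVoiceProfile exposes a `name` attribute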

banterbot.managers.memory_chain module

class banterbot.managers.memory_chain.MemoryChain(uuid: str, memory_index: dict[str, list[str]])[source]

Bases: object

MemoryChain is a class responsible for managing arrays of memories using Protocol Buffers. It provides functionality to save memories to a binary file, load memories from a binary file, and retrieve memories by keyword, allowing for quick access to relevant information.

append(memory: Memory) None[source]

Append a memory to the current set of memories. This method is used to add a single memory to the MemoryChain, allowing for the storage of new information. All changes are saved to file as soon as they are made.

Parameters:

memory (Memory) – The memory to append.

classmethod create() Self[source]

Generate a new empty set of memories and associated UUID.

Returns:

A new instance of MemoryChain with an empty set of memories and a unique UUID.

Return type:

MemoryChain

classmethod delete(uuid: str) None[source]

Delete the directory associated with a MemoryChain instance. This method is used to clean up the file system by removing the directory and all its contents, including memory files and the memory index file.

Parameters:

uuid (str) – The UUID associated with this set of memories.

extend(memories: list[Memory]) None[source]

Extend the current set of memories with a list of memories. This method is used to add multiple memories to the MemoryChain at once, allowing for the storage of new information in bulk. All changes are saved to file as soon as they are made.

Parameters:

memories (list[Memory]) – The list of memories to append.

classmethod load(uuid: str) Self[source]

Load the memories from a binary file using Protocol Buffers deserialization and create a MemoryChain instance. This method is used to load an existing set of memories from a file, allowing for the continuation of a previous session or the sharing of memories between different instances.

Parameters:

uuid (str) – The UUID of the memory files to load.

Returns:

A new instance of MemoryChain with loaded memories.

Return type:

MemoryChain
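
Example (a minimal sketch of the create/load/delete lifecycle; it assumes the instance exposes its UUID as a `uuid` attribute, and omits Memory construction, whose fields are defined elsewhere):

    from banterbot.managers.memory_chain import MemoryChain

    # Start a fresh, empty chain; a UUID is generated automatically and all
    # changes are written to disk as soon as they are made.
    chain = MemoryChain.create()

    # In a later session, reload the same memories by UUID.
    restored = MemoryChain.load(chain.uuid)  # `uuid` attribute is an assumption

    # Remove the chain's directory and files when they are no longer needed.
    MemoryChain.delete(chain.uuid)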

search(keywords: list[str], fuzzy_threshold: float | None = None) list[Memory][source]

Look up memories based on keywords. This method is used to retrieve memories that are relevant to the specified keywords. It can also perform fuzzy matching, allowing for the retrieval of memories that are similar to the given keywords based on a similarity threshold.

Parameters:
  • keywords (list[str]) – The list of keywords to look up.

  • fuzzy_threshold (Optional[float]) – The threshold for fuzzy matching. If None, only exact matches are returned. If a value is provided, memories with keywords that have a similarity score greater than or equal to the threshold will also be returned.

Returns:

The list of matching memories.

Return type:

list[Memory]
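
Example (continuing the sketch above; the keywords and threshold are illustrative):

    # Exact keyword lookup.
    exact = chain.search(keywords=["travel", "weather"])

    # Fuzzy lookup: also return memories whose keywords have a similarity
    # score of at least 0.8 relative to the query keywords.
    fuzzy = chain.search(keywords=["travel"], fuzzy_threshold=0.8)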

banterbot.managers.openai_model_manager module

class banterbot.managers.openai_model_manager.OpenAIModelManager[source]

Bases: object

Management utility for loading OpenAI ChatCompletion models from the resource JSON specified by config.openai_models. Only one instance per name is permitted to exist at a time, and loading occurs lazily, meaning that when a name is loaded, it is subsequently stored in cache and all future calls refer to the cached instance.

classmethod list() list[str][source]

List the names of all the available OpenAI ChatCompletion models.

Returns:

A list of names.

Return type:

list[str]

classmethod load(name: str) OpenAIModel[source]

Retrieve or initialize an OpenAIModel instance by a name in the OpenAIModels resource JSON.

Parameters:

name (str) – The name of the OpenAI ChatCompletion model.

Returns:

An OpenAIModel instance loaded with data from the specified name.

Return type:

OpenAIModel

Raises:

KeyError – If the specified name is not found in the resource file defined by config.openai_models.
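Example (a minimal sketch; it loads the first listed model so that no model name has to be assumed):

    from banterbot.managers.openai_model_manager import OpenAIModelManager

    # List the available ChatCompletion model names.
    names = OpenAIModelManager.list()

    # Load (and cache) one of them; raises KeyError for unknown names.
    model = OpenAIModelManager.load(names[0])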

banterbot.managers.resource_manager module

class banterbot.managers.resource_manager.ResourceManager[source]

Bases: object

An interface to simplify loading resources from the /banterbot/resources/ data directory. In addition to syntactically simplifying the process, this class gives the option to cache the loaded files to reduce overhead on future calls.

classmethod load_csv(filename: str, cache: bool = True, reset: bool = False, encoding: str = 'utf-8', delimiter: str = ',', quotechar: str = '"', dialect: str = 'excel', strict: bool = True) list[list[str]][source]

Load a specified CSV file by filename and return its contents as a nested list of strings.

Parameters:
  • filename (str) – The name of the resource file — should be a CSV file.

  • cache (bool) – If True, cache the loaded data to reduce overhead the next time it is loaded.

  • reset (bool) – If set to True, reloads the contents from file, disregarding the current state of the cache.

  • encoding (str) – The type of encoding to use when loading the resource.

  • delimiter (str) – The CSV delimiter character.

  • quotechar (str) – The CSV quote character.

  • dialect (str) – The CSV dialect.

  • strict (bool) – If True, raises an exception when the file is not correctly formatted.

Returns:

The CSV data formatted as a nested list of strings.

Return type:

list[list[str]]
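
Example (a minimal sketch; the filename is hypothetical and must exist in the /banterbot/resources/ directory):

    from banterbot.managers.resource_manager import ResourceManager

    # Load and cache a CSV resource as a nested list of strings.
    rows = ResourceManager.load_csv("example_table.csv")
    header, *records = rows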

classmethod load_json(filename: str, cache: bool = True, reset: bool = False, encoding: str = 'utf-8') dict[Any][source]

Load a specified JSON file by filename and return its contents as a dictionary.

Parameters:
  • filename (str) – The name of the resource file — should be a JSON file.

  • cache (bool) – If True, cache the loaded data to reduce overhead the next time it is loaded.

  • reset (bool) – If set to True, reloads the contents from file, disregarding the current state of the cache.

  • encoding (str) – The type of encoding to use when loading the resource.

Returns:

The JSON data formatted as a dictionary.

Return type:

dict[Any]
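
Example (a minimal sketch; the filename is hypothetical):

    from banterbot.managers.resource_manager import ResourceManager

    # Load and cache a JSON resource.
    data = ResourceManager.load_json("example_config.json")

    # Bypass the cache and reload the file from disk.
    fresh = ResourceManager.load_json("example_config.json", reset=True)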

classmethod load_raw(filename: str, cache: bool = True, reset: bool = False, encoding: str = 'utf-8') str[source]

Load a specified file by filename and return its contents as a string.

Parameters:
  • filename (str) – The name of the resource file, including its suffix.

  • cache (bool) – If True, cache the loaded data to reduce overhead the next time it is loaded.

  • reset (bool) – If set to True, reloads the contents from file, disregarding the current state of the cache.

  • encoding (str) – The type of encoding to use when loading the resource.

Returns:

The resource file’s contents as a string.

Return type:

str

classmethod reset_cache() None[source]

Reset the entire cache by deleting the contents of the ResourceManager._raw_data, ResourceManager._csv_data, and ResourceManager._json_data dicts.

classmethod reset_csv_cache() None[source]

Reset the CSV data cache by deleting the contents of the ResourceManager._csv_data dict.

classmethod reset_json_cache() None[source]

Reset the JSON data cache by deleting the contents of the ResourceManager._json_data dict.

classmethod reset_raw_cache() None[source]

Reset the raw data cache by deleting the contents of the ResourceManager._raw_data dict.

banterbot.managers.stream_manager module

class banterbot.managers.stream_manager.StreamManager[source]

Bases: object

Manages streaming of data through threads and allows hard or soft interruption of the streamed data.

connect_completion_handler(func: Callable[[list[StreamLogEntry], dict], Any]) None[source]

Connects an optional completion handler function for handling the final result of the parser. The handler function should take a list of StreamLogEntry instances and a dictionary which will contain shared data between the connected functions.

Parameters:

func (Callable[[list[StreamLogEntry], dict], Any]) – The completion handler function to be used.

connect_exception_handler(func: Callable[[list[StreamLogEntry], int, dict], Any]) None[source]

Connects an optional exception handler function for the parser, to be used when the stream iterable is interrupted. The stream exception handler function is provided with the log and the current index for all remaining items in the stream. The handler function should take a list of StreamLogEntry instances, the current index of the log, and a dictionary which will contain shared data between the connected functions.

Parameters:

func (Callable[[list[StreamLogEntry], int, dict], Any]) – The exception handler function to be used.

connect_processor(func: Callable[[list[StreamLogEntry], int, dict], Any]) None[source]

Connects a processor function for processing each streamed item. The stream processor function should take a list of StreamLogEntry instances, the current index of the log, and a dictionary which will contain shared data between the connected functions.

Parameters:

func (Callable[[list[StreamLogEntry], int, dict], Any]) – The stream processor function to be used.

stream(iterable: Iterable[Any], close_stream: Callable | None = None, init_shared_data: dict[str, Any] | None = None) StreamHandler[source]

Starts streaming data from an iterable source in a separate thread.

Parameters:
  • iterable (Iterable[Any]) – The iterable to stream data from.

  • close_stream (Optional[Callable]) – The callable to use for closing the iterable.

  • init_shared_data (Optional[dict[str, Any]]) – The initial shared data to use (the key "interrupt" is reserved).
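
Example (a minimal sketch of wiring the handlers together; the StreamLogEntry `value` attribute and the use of the returned StreamHandler are assumptions):

    from banterbot.managers.stream_manager import StreamManager

    manager = StreamManager()

    # Processor: called for each streamed item as it is logged.
    def process(log, index, shared):
        print(log[index].value)  # `value` attribute is an assumption about StreamLogEntry

    # Completion handler: receives the full log and the shared-data dict when the stream finishes.
    def finish(log, shared):
        shared["count"] = len(log)

    manager.connect_processor(process)
    manager.connect_completion_handler(finish)

    # Stream an ordinary iterable in a background thread; the returned StreamHandler
    # can be used to manage (e.g., interrupt) the stream.
    handler = manager.stream(iterable=range(5), init_shared_data={"count": 0})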