Immersitech SDK Documentation
The Immersitech SDK is a C/C++ library that functions as an audio mixer and audio processor featuring 3D spatial audio processing, noise cancellation, and speech enhancement.
The Immersitech SDK is currently made for developers who have direct access to raw audio data. If you can supply raw audio buffers, the SDK will process them and return a processed raw audio output buffer. Additionally, the Immersitech SDK allows you to change the audio settings in real time for any participant.
Before diving into how to utilize the library, be sure to check out Key Concepts for All Modules to get an understanding of terminology that will be used throughout the documentation.
Let's take a high-level look at the small number of simple steps needed to use the library:
The first step in your code will be to initialize the Immersitech library. This allows Immersitech to set everything up internally for audio processing. To do so, create an imm_library_configuration, which lets you specify the sampling rate, number of channels, and so on. Also note that we pass an imm_error_code value; it will record whether or not initialization succeeded and, if not, why. For now we will disable the room layout and websocket features by passing NULL for those configuration files.
Optionally, check the version of your library to ensure you are up to date. You can also check to see if your license is valid and other details like when it may expire.
Now that the library is initialized, we can begin to create rooms. You can pick any room id that you'd like as long as you haven't already used it to create a different room.
Let's add two participants into this room, both with 1 channel input. We can again pick any id as long as there aren't duplicates in the same room. Note that you can have different participant configurations for each participant.
Now that we have some participants in our room, we can start processing audio. This happens in two steps: first input all the participants' audio, then process and generate the output for each participant.
The first of the two steps is to add each participant's audio into the engine as you receive it. Do this once for each participant: it establishes that participant's input audio, i.e. the audio that will be used whenever this participant is treated as a source.
Please ensure that your input buffer has the correct number of samples and that you pass the number of FRAMES to the function call, not the number of samples. For more clarification on buffer sizes, please refer to Understanding Audio Buffer Sizes.
The second step of audio processing is to generate the output for each participant as a listener. This means calling the process function once for each participant to generate the stereo output of what that participant should hear.
To do so, simply provide an output buffer in which to store the results. The output buffer data will be formatted the way you specified upon initializing the library. Find more information about the different output formats under imm_library_configuration. Once again, you will want to ensure that the output buffer you provide has enough memory allocated for the number of frames and number of output channels you selected.
And that's it! You can now adjust the features of the audio processing for each participant by using the set state function. There is a full list of the available audio effects and their default states at imm_audio_control.
To move a participant in 3D space, you can manually place them in 3D space by setting their position:
If you'd prefer to have the library take care of where to place participants for you, set the room layout of a room and move participants to seats instead. New participants will then be placed in the next best unoccupied seat. Note that to use seats and automatic room layouts, you must supply a room layout configuration file and point to it during the initialization step with imm_initialize_library, where we used NULL in this example.
If at any point a participant chooses to leave the call, remove them from the conference.
When a conference is finished, free all the memory for that room.
When you are finished using the Immersitech library, be sure to destroy the library to free the memory allocated during initialization. Do not call this function before you are completely finished using the library:
If you are now looking to examine a fully functional piece of code using the Immersitech Library, please reference the included immersitech_example.c or noise_cancellation_example.c.
The Immersitech Library does not require any special dependencies on Mac or Windows.
On Linux, however, it requires that your C/C++ libraries are at least the following versions:
Ubuntu: GLIBC_2.27
Debian: GLIBC_2.29
If you plan on using the websocket server feature of the library, you will need to install and link the following libraries to your program:
-lcrypto -lssl
In order to use the Immersitech Sound Manager libraries, you will need these files:
The following files are optional for more advanced feature usage:
To use the Immersitech Library, include immersitech.h in your project and call its functions from your code. You will also need to link the dynamic library to your project and ensure it is present at the location you linked against. Also make sure that the path you supply to your license file in your code matches the actual location of the license file.
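As one possible build invocation on Linux: the library name (-limmersitech), the file names, and every path below are placeholders, not values from this documentation; point them at wherever you installed the SDK. The -lssl -lcrypto flags are only needed when you use the websocket server feature.

```shell
# Placeholders throughout -- substitute your own paths and file names.
gcc my_app.c \
    -I/path/to/immersitech/include \
    -L/path/to/immersitech/lib -limmersitech \
    -lssl -lcrypto \
    -o my_app

# The dynamic library must also be findable at run time:
export LD_LIBRARY_PATH=/path/to/immersitech/lib:$LD_LIBRARY_PATH
```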
For the full detailed API description, please visit immersitech.h