By Hina Dixit

Multimodal Models Explained: Combining Different Data Types for Better AI

Updated: Sep 4, 2023

Multimodal models are a rapidly growing area of artificial intelligence that focuses on combining different data types, such as text, audio, and visual information, to yield more robust and accurate predictions. By integrating information from diverse sources, multimodal models can build a richer and more complete understanding of the underlying data, unlocking new insights and enabling a wide range of applications.





Modality refers to the type of information or the representation format in which information is stored and processed. Each modality represents a specific channel of communication or sensory experience, providing unique insights and information about a given object, concept, or task. Multimodal models are designed to handle and integrate multiple modalities simultaneously, allowing them to build a richer and more comprehensive understanding of the underlying data.


Examples of modalities:

  • Visual (from images or videos)

  • Natural language (spoken or written)

  • Auditory (including voice, sounds and music)

  • Haptics / touch

  • Smell, taste and self-motion

  • Physiological signals, such as electrocardiogram (ECG) and skin conductance

  • Other modalities: infrared images, depth images, fMRI

Some early techniques and models in this direction:


Hidden Markov Models (HMMs) are statistical models used to represent systems with unobservable or "hidden" states that generate observable data. In the context of multimodal models, HMMs can be employed to model relationships between different modalities and estimate the hidden states that influence the observed data. For example, consider a multimodal system that processes both audio and visual data for speech recognition. In this case, the hidden states could represent the actual spoken words, while the observed data would be the audio signals and the visual information, such as lip movements. An HMM can be used to model the relationship between the audio and visual modalities and estimate the hidden states (spoken words) based on the observed data.
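To make this concrete, here is a minimal sketch (an illustration under assumed inputs, not the exact approach described above) using the hmmlearn library: per-frame audio and visual feature vectors are concatenated, a Gaussian HMM is fit to the fused observations, and its hidden states stand in for the unobserved spoken units. The feature arrays are random placeholders for real features such as MFCCs and lip landmarks.

```python
# Minimal sketch: feature-level fusion of audio and visual streams, modeled
# with a Gaussian HMM from hmmlearn. All features are random placeholders.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)
n_frames = 200
audio_feats = rng.normal(size=(n_frames, 13))   # stand-in for MFCC audio features
visual_feats = rng.normal(size=(n_frames, 8))   # stand-in for lip-movement features

# Fuse the modalities frame by frame into one observation vector
observations = np.concatenate([audio_feats, visual_feats], axis=1)

# Hidden states represent the unobserved spoken units generating both streams
model = hmm.GaussianHMM(n_components=5, covariance_type="diag", n_iter=50)
model.fit(observations)

# Most likely hidden-state sequence given the fused observations
states = model.predict(observations)
print(states[:20])
```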




Convolutional Neural Networks (CNNs) are a type of deep learning model primarily used for processing grid-like data, such as images. In the context of multimodal models, CNNs can be employed as a component to process and extract features from visual data, while other types of neural networks or models handle other modalities, such as text or audio. For example, consider a multimodal system that processes both images and text for a task like image captioning or visual question answering. In this case, a CNN can be used to process the image data and extract relevant features, while another neural network, such as a Recurrent Neural Network (RNN) or a Transformer, can process the text data. Once the features are extracted from each modality, they can be combined or fused using various fusion techniques to generate a more accurate and informative prediction.
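As a rough sketch of that setup (a hypothetical PyTorch model with assumed sizes, not a reference implementation), the snippet below uses a small CNN to extract image features and an LSTM to encode text, then concatenates the two feature vectors and passes them to a classification head.

```python
# Hypothetical PyTorch sketch: CNN encoder for images, LSTM encoder for text,
# concatenation fusion, and a linear classification head.
import torch
import torch.nn as nn

class ImageTextModel(nn.Module):
    def __init__(self, vocab_size=1000, num_classes=10):
        super().__init__()
        # CNN branch for the image modality
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),          # -> (batch, 32)
        )
        # Recurrent branch for the text modality
        self.embed = nn.Embedding(vocab_size, 64)
        self.rnn = nn.LSTM(64, 32, batch_first=True)
        # Fusion by concatenation, then a classifier head
        self.head = nn.Linear(32 + 32, num_classes)

    def forward(self, image, tokens):
        img_feat = self.cnn(image)                          # (batch, 32)
        _, (h_n, _) = self.rnn(self.embed(tokens))          # h_n: (1, batch, 32)
        txt_feat = h_n[-1]                                  # (batch, 32)
        fused = torch.cat([img_feat, txt_feat], dim=1)      # concatenation fusion
        return self.head(fused)

model = ImageTextModel()
logits = model(torch.randn(4, 3, 64, 64), torch.randint(0, 1000, (4, 12)))
print(logits.shape)   # torch.Size([4, 10])
```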




The New Era: Architecture of Multimodal Models


There are several important aspects to consider when building multimodal models. Here are some of the basics:

  1. Encoder-decoder models: These models consist of an encoder that processes the input data and a decoder that generates the output. They can be adapted for multimodal tasks by using separate encoders for each data type and a shared decoder.

  2. Attention mechanisms: Attention mechanisms allow models to focus on specific parts of the input data, which can be particularly useful for multimodal tasks. For example, a model might use attention to focus on relevant parts of an image while processing text (see the sketch after this list).

  3. Transformer models: Transformers are a neural network architecture that has been highly successful in natural language processing tasks. They can be adapted for multimodal tasks by incorporating multiple input types and using attention mechanisms to process the data.
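As a concrete (assumed) example of attention across modalities, the sketch below uses PyTorch's nn.MultiheadAttention so that text-token features act as queries over image-patch features, letting the model focus on the image regions most relevant to each word. The tensors are random placeholders.

```python
# Cross-modal attention sketch: text tokens attend over image patches.
import torch
import torch.nn as nn

d_model = 64
cross_attn = nn.MultiheadAttention(embed_dim=d_model, num_heads=4, batch_first=True)

text_feats = torch.randn(2, 12, d_model)    # (batch, text tokens, dim)
image_feats = torch.randn(2, 49, d_model)   # (batch, 7x7 image patches, dim)

# Queries come from the text; keys and values come from the image patches
attended, weights = cross_attn(query=text_feats, key=image_feats, value=image_feats)
print(attended.shape)   # torch.Size([2, 12, 64])
print(weights.shape)    # torch.Size([2, 12, 49]) attention over patches per token
```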


Multimodal deep learning models are designed to process and analyze data from multiple modalities, such as text, images, audio, and video, simultaneously. They typically consist of multiple unimodal neural networks, each specialized in processing a specific input modality; for instance, an audiovisual model may have two unimodal networks, one for audio and another for visual data. This individual processing of each modality is known as encoding.


Once the unimodal encoding is completed, the information extracted from each modality must be integrated or fused. Several fusion techniques are available for this purpose, ranging from simple concatenation to more complex attention mechanisms. The choice of fusion technique is crucial for the success of multimodal models, as it determines how the complementary information from different modalities is combined to create a joint representation of the data.


In general, multimodal architectures consist of three parts:

  • Unimodal encoders: These encode individual modalities, with one encoder for each input modality.

  • Fusion network: This component combines the features extracted from each input modality during the encoding phase.

  • Classifier: This part of the architecture accepts the fused data and makes predictions or classifications based on the joint representation of the data.


By processing and integrating information from multiple modalities, multimodal deep learning models can achieve a more comprehensive understanding of the underlying data, leading to improved performance in various applications, such as image captioning, speech recognition, and natural language processing.




Let's dive deeper:


Encoding Stage


The encoding stage involves extracting features from the input data in each modality and converting them into a common representation that can be processed by subsequent layers in the model. The encoder is typically composed of several layers of neural networks that use nonlinear transformations to extract increasingly abstract features from the input data.


Each modality has its own encoder that transforms the input data into a set of feature vectors. The output of each encoder is then combined into a single representation that captures the relevant information from each modality.


For example, a Convolutional Neural Network (CNN) can be used to encode image data, while a Recurrent Neural Network (RNN) or Transformer encodes text data. Popular approaches for combining the outputs of the individual encoders include concatenating them into a single vector or using attention mechanisms to weigh each modality's contribution according to its relevance to the task at hand.


Fusion Module


The fusion module combines information from different modalities (e.g., text, image, audio) into a single representation that can be used for downstream tasks such as classification, regression, or generation. The fusion module can take various forms depending on the specific architecture and task at hand.


Common approaches for fusion include using a weighted sum of the modalities' features, where the weights are learned during training, or concatenating the modalities' features and passing them through a neural network to learn a joint representation. In some cases, attention mechanisms can be used to learn which modality should be attended to at each time step.
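A learned weighted sum can be implemented in a few lines. The sketch below (a simplified, assumed version) projects each modality's features to a shared size, learns one scalar weight per modality, normalizes the weights with a softmax, and sums the projected features.

```python
# Weighted-sum fusion sketch: learned, softmax-normalized modality weights.
import torch
import torch.nn as nn

class WeightedSumFusion(nn.Module):
    def __init__(self, dims, fused_dim=128):
        super().__init__()
        # One projection per modality into the shared fusion space
        self.projections = nn.ModuleList(nn.Linear(d, fused_dim) for d in dims)
        # One learnable scalar weight per modality
        self.weights = nn.Parameter(torch.zeros(len(dims)))

    def forward(self, features):
        # features: list of (batch, dim_i) tensors, one per modality
        projected = torch.stack(
            [proj(f) for proj, f in zip(self.projections, features)], dim=0
        )                                        # (n_modalities, batch, fused_dim)
        w = torch.softmax(self.weights, dim=0)   # normalized modality weights
        return (w[:, None, None] * projected).sum(dim=0)   # (batch, fused_dim)

fusion = WeightedSumFusion(dims=[512, 300])      # e.g. image and text feature sizes
joint = fusion([torch.randn(4, 512), torch.randn(4, 300)])
print(joint.shape)   # torch.Size([4, 128])
```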


The outputs of the unimodal networks are combined using one of several fusion strategies, such as early fusion, late fusion, or hybrid fusion, to create a joint representation of the data; a brief code sketch of the first two follows the list.

  • Early fusion: This approach involves concatenating the raw data from different modalities into a single input vector and feeding it to the network. By combining the data at the input level, early fusion allows the model to learn joint representations directly from the raw data.

  • Late fusion: In this approach, separate networks are trained for each modality, and their outputs are combined at a later stage, such as during decision-making or output generation. Late fusion allows the model to learn modality-specific features independently before integrating them to make a final prediction.

  • Hybrid fusion: This method combines elements of both early and late fusion to create a more flexible and adaptable model. For example, a hybrid fusion approach might involve combining the outputs of modality-specific networks with a shared representation learned from concatenated raw data.
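The sketch below (a simplified, assumed illustration) contrasts the first two strategies: early fusion concatenates the modality inputs before a single shared network, while late fusion trains separate per-modality networks and averages their predictions.

```python
# Early fusion vs. late fusion for a two-modality classifier (toy sketch).
import torch
import torch.nn as nn

audio = torch.randn(4, 40)    # placeholder audio features (batch, 40)
video = torch.randn(4, 64)    # placeholder visual features (batch, 64)
num_classes = 5

# Early fusion: concatenate the modalities, then one shared network
early_net = nn.Sequential(nn.Linear(40 + 64, 32), nn.ReLU(), nn.Linear(32, num_classes))
early_logits = early_net(torch.cat([audio, video], dim=1))

# Late fusion: separate networks per modality, combined at the decision stage
audio_net = nn.Sequential(nn.Linear(40, 32), nn.ReLU(), nn.Linear(32, num_classes))
video_net = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, num_classes))
late_logits = (audio_net(audio) + video_net(video)) / 2   # simple averaging

print(early_logits.shape, late_logits.shape)   # both torch.Size([4, 5])
```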

Classification


The classification module takes the joint representation generated by the fusion module and uses it to make a prediction or decision. The specific architecture and approach used in the classification module can vary depending on the task and type of data being processed.


In many cases, the classification module takes the form of a neural network, where the joint representation is passed through one or more fully connected layers before the final prediction is made. These layers can include non-linear activation functions, dropout, and other techniques to help prevent overfitting and improve generalization performance.
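For instance, a typical classification head on top of the fused representation might look like the following sketch (the layer sizes are assumptions):

```python
# Classification head over the fused representation: two fully connected
# layers with a ReLU nonlinearity and dropout for regularization.
import torch
import torch.nn as nn

fused_dim, num_classes = 128, 3
classifier = nn.Sequential(
    nn.Linear(fused_dim, 64),
    nn.ReLU(),
    nn.Dropout(p=0.5),        # helps prevent overfitting
    nn.Linear(64, num_classes),
)

joint_representation = torch.randn(4, fused_dim)   # placeholder fused features
logits = classifier(joint_representation)
probs = torch.softmax(logits, dim=1)               # class probabilities
print(probs.shape)   # torch.Size([4, 3])
```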


The output of the classification module depends on the specific task at hand. For example, in a multimodal sentiment analysis task, the output might be a binary decision indicating whether the combined text and image input expresses positive or negative sentiment. In a multimodal image captioning task, the output might be a sentence describing the content of the image. The classifier can be a fully connected neural network layer, a softmax layer, or any other suitable decision-making component.


In summary, multimodal deep learning involves three main stages: encoding, fusion, and classification. Integrating information from multiple modalities at each of these stages gives these models a more comprehensive understanding of the underlying data, which translates into improved performance in applications such as image captioning, speech recognition, and natural language processing.


Applications of Multimodal Models


Multimodal learning has found applications in various fields, including speech recognition, autonomous vehicles, and emotion recognition. Some examples:

  1. Emotion recognition: By combining visual data (facial expressions) with audio data (speech), multimodal models can better understand and recognize human emotions.

  2. Autonomous vehicles: Multimodal models can process and analyze data from multiple sensors, such as cameras, lidar, and radar, to make better decisions and improve the safety of self-driving cars.

  3. Speech recognition: By combining audio and visual data, such as lip movements, multimodal models can improve speech recognition accuracy.


Challenges in Multimodal Learning

Multimodal deep learning aims to solve five core challenges that are active areas of research:

  1. Data fusion: Combining data from different modalities effectively to create a more comprehensive representation of the data.

  2. Alignment: Finding relationships between different modalities, such as matching text descriptions with corresponding images.

  3. Translation: Converting information from one modality to another, such as generating text descriptions from images.

  4. Modality-specific learning: Developing models that can learn from specific modalities and transfer that knowledge to other modalities.

  5. Multitask learning: Training models to perform multiple tasks simultaneously, such as image classification and captioning.


Old School: Combining Models

Combining models is a machine learning technique that uses multiple models together to achieve better performance than any single model could on its own [1]. The idea is that one model's strengths can compensate for another's weaknesses, resulting in more accurate and robust predictions. Ensemble methods such as bagging, as well as stacking, are common ways of combining models.


Ensemble models involve training multiple models and aggregating their predictions. Common ensemble techniques include bagging, which trains multiple models on different subsets of the training data, and boosting, which trains models sequentially, with each model focusing on the errors made by the previous model.


Stacking is another technique for combining models, where the predictions of multiple base models are used as input features for a higher-level model, called a meta-learner, which learns how to weigh the base models' predictions to make a final prediction.
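As a brief illustration (scikit-learn on a synthetic dataset, not tied to any particular application), the snippet below builds a small stacking ensemble in which a logistic-regression meta-learner combines the predictions of two base models.

```python
# Stacking sketch: a logistic-regression meta-learner combines two base models.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("svc", SVC(probability=True, random_state=0)),
    ],
    final_estimator=LogisticRegression(),   # the meta-learner
)
stack.fit(X_train, y_train)
print("stacked accuracy:", stack.score(X_test, y_test))
```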




Multimodal learning, on the other hand, is a subfield of artificial intelligence that focuses on effectively processing and analyzing data from multiple modalities, such as text, images, audio, and video. In multimodal learning, the goal is to build a more complete and accurate understanding of the underlying data by combining information from different sources.


The main difference between combining models and multimodal learning is that combining models involves using multiple models to improve overall predictive performance, while multimodal learning involves learning from and combining information from multiple data types to enhance the model's understanding of the data. Both approaches can lead to better AI performance, but they tackle the problem from different perspectives.



Future of Multimodal AI

Multimodal AI models are at the heart of the generative AI boom, with image generators like DALL-E, Stable Diffusion, and Midjourney relying on systems that link text and images during training. These systems learn patterns in visual data while connecting that information to descriptions of the images, enabling them to generate pictures that follow users' text prompts. Many AI tools that generate video or audio work in a similar way.

In conclusion, multimodal models have the potential to revolutionize AI by combining different data types to enhance the power and precision of machine learning algorithms. As research in this area continues to advance, we can expect to see even more impressive applications and breakthroughs in the field of artificial intelligence.



Citations:

[1] https://www.knowledgehut.com/blog/data-science/combining-models-machine-learning-with-python

[2] https://machinelearningmastery.com/combine-predictions-for-ensemble-learning/

[3] https://aryanbajaj13.medium.com/ensemble-models-how-to-make-better-predictions-by-combining-multiple-models-with-python-codes-6ac54403414e

[4] https://builtin.com/machine-learning/ensemble-model

[5] https://control.com/technical-articles/combining-two-deep-learning-models/


