This article provides a comprehensive overview of data preprocessing pipelines for the fusion of functional near-infrared spectroscopy (fNIRS) and electroencephalography (EEG), tailored for researchers and drug development professionals. It covers the foundational principles of both modalities, explores methodological approaches for integrated analysis—from early fusion to deep learning strategies—and addresses critical troubleshooting steps for artifact correction and data quality control. Furthermore, it outlines validation frameworks and comparative analyses of fusion techniques, highlighting their applications in brain-computer interfaces and clinical neurology. The goal is to serve as a technical guideline for implementing reproducible and effective fNIRS-EEG fusion to advance multimodal brain imaging research.
Electroencephalography (EEG) measures electrical activity generated by the synchronized firing of neuronal populations in the brain. These signals represent the summation of post-synaptic potentials from pyramidal cells that are oriented in parallel, creating electrical fields strong enough to be detected at the scalp surface. The basis of EEG lies in the rhythmic, synchronized oscillations of these neural populations, which produce distinct brain wave patterns classified by their frequency ranges: delta (0.5-4 Hz), theta (4-8 Hz), alpha (8-13 Hz), beta (13-30 Hz), and gamma (>30 Hz) [1] [2].
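As a quick illustration of these frequency bands, the sketch below estimates per-band power from an FFT spectrum of a synthetic signal. The signal, sampling rate, and epoch length are assumptions for demonstration only; the band edges follow the ranges listed above.

```python
import numpy as np

# Canonical EEG bands (Hz), matching the ranges in the text.
BANDS = {
    "delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
    "beta": (13, 30), "gamma": (30, 100),
}

def band_powers(signal, fs):
    """Sum one-sided FFT power within each canonical band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in BANDS.items()}

fs = 250.0                                  # assumed EEG sampling rate
t = np.arange(0, 4, 1 / fs)                 # 4 s epoch
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)  # 10 Hz "alpha"
powers = band_powers(x, fs)
dominant = max(powers, key=powers.get)      # expected: "alpha"
```

For a clean 10 Hz oscillation the alpha band dominates, mirroring how band power summaries are commonly used as EEG features.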
The alpha rhythm (8-13 Hz), first described by Hans Berger in 1929, remains one of the most studied EEG oscillations due to its prominent role in brain function [3]. It is most evident during wakeful relaxation with closed eyes and is thought to serve an inhibitory role, actively suppressing irrelevant brain regions during cognitive processing [3]. Recent research suggests the alpha rhythm may have played a pivotal role in the cognitive evolution of nocturnal mammals, potentially serving to maintain wakefulness during nighttime hours [3]. Studies have shown that alpha oscillations exhibit characteristic patterns across states: reduced alpha power during cognitive engagement, increased alpha synchronization during internal processing, and spatial distributions that vary with age and cognitive status [3].
Q: Why are my reference or ground electrodes showing persistent high impedance or oversaturation?
A: This issue commonly arises from improper electrode-skin contact or individual physiological factors [4].
Q: How can I minimize jitter and latency when synchronizing EEG with other devices like fNIRS?
A: Precise temporal alignment is crucial for multimodal research [5] [6].
Q: Which pre-processing steps have the most significant impact on EEG data quality?
A: Research indicates that signal segmentation and re-referencing methods are particularly critical [7].
Q: What are the primary challenges in fusing EEG with fNIRS data?
A: The fusion of EEG and fNIRS is complicated by their fundamentally different signal origins and artifact profiles [8].
This protocol outlines the methodology for acquiring a simultaneous EEG-fNIRS dataset for brain-computer interface applications, adapted from a publicly available benchmark dataset [9].
This protocol examines alpha rhythm dynamics during emotional experiences using EEG microstate analysis [3].
Table 1: Impact of Different Re-referencing Methods on EEG Data Quality
| Re-referencing Method | Acronym | Key Characteristics | Effect on ERSP Topography |
|---|---|---|---|
| Common Average Reference [7] | CAR | Re-references the data to the average of all electrodes | Similar to REST and RESIT |
| Robust Common Average Reference [7] | rCAR | A variant of CAR that is less sensitive to outlier channels | Shows the most divergent pattern |
| Reference Electrode Standardization Technique [7] | REST | Estimates reference at infinity using a head model | Similar to CAR and RESIT |
| Reference Electrode Standardization and Interpolation Technique [7] | RESIT | Combines standardization with interpolation | Similar to CAR and REST |
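As a minimal illustration, the CAR method in the table above amounts to subtracting the instantaneous mean across all electrodes from every channel. The channels-by-samples array layout below is an assumption, not something prescribed by the cited study.

```python
import numpy as np

def common_average_reference(data):
    """Subtract the across-channel mean at each time point (CAR).

    data: array of shape (n_channels, n_samples).
    """
    return data - data.mean(axis=0, keepdims=True)

eeg = np.array([[1.0, 2.0],
                [3.0, 4.0],
                [5.0, 6.0]])          # 3 channels x 2 samples (toy values)
car = common_average_reference(eeg)
# After CAR, the mean across channels is zero at every sample.
```

REST and RESIT require a head model and are typically delegated to dedicated toolboxes rather than implemented inline like this.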
Table 2: Comparative Characteristics of EEG and fNIRS Neuroimaging Techniques
| Characteristic | EEG | fNIRS |
|---|---|---|
| Measured Signal | Electrical activity from synchronized neuronal firing [8] [9] | Hemodynamic response (blood oxygenation) [8] [9] |
| Temporal Resolution | Millisecond level [8] [9] | Slower (seconds) [8] [9] |
| Spatial Resolution | Limited [8] [9] | Better than EEG [8] [9] |
| Primary Artifacts | Ocular (EOG), muscle (EMG) [8] | Systemic physiology (cardiac, respiratory, blood pressure) [8] |
| Main Strength | Direct measure of neural electrical activity with high temporal precision [9] | Better spatial localization and less susceptible to movement artifacts [8] [9] |
Table 3: Essential Materials for EEG-fNIRS Multimodal Research
| Item | Function & Application |
|---|---|
| EEG Electrode Cap | Holds electrodes in standardized positions (10-10/10-20 system) for consistent scalp coverage [9] [10]. |
| Conductive Electrode Gel/Paste | Improves electrical contact between scalp and electrodes, reducing impedance and signal noise [4]. |
| fNIRS Optodes | Sources emit near-infrared light into the head; detectors measure light intensity after tissue absorption [8]. |
| Abrasive Skin Prep Gel | Gently removes dead skin cells and oils to significantly reduce skin-electrode impedance [4]. |
| Lab Streaming Layer (LSL) | Open-source platform for synchronized multimodal data acquisition, critical for temporal alignment of EEG and fNIRS [5] [6]. |
| Reference & Ground Electrodes | Essential for creating a stable electrical reference point; often placed on mastoids or other locations [4] [10]. |
What is the hemodynamic response and how does fNIRS measure it?
Functional near-infrared spectroscopy (fNIRS) is a non-invasive optical neuroimaging technique that measures brain activity by detecting hemodynamic changes associated with neuronal activation. This process relies on neurovascular coupling, where active neuronal tissue triggers a rapid delivery of blood, resulting in localized changes in blood oxygenation [11] [12].
During brain activation, a complex physiological sequence occurs: First, neuronal activity increases local energy demands, initially depleting oxygen and causing a brief rise in deoxygenated hemoglobin. This triggers a subsequent oversupply of cerebral blood flow through local arterial vasodilation. The resulting hemodynamic response typically shows an increase in oxygenated hemoglobin and a decrease in deoxygenated hemoglobin as oxygenated blood flushes through the active region [12] [13].
fNIRS Hemodynamic Response Pathway:
The typical adult hemodynamic response follows a characteristic pattern, often modeled using a canonical hemodynamic response function composed of two Gamma functions to characterize the positive response and undershoot [14]. However, this response can vary between brain regions, across trial repetitions, and among individuals. In newborn populations, for instance, studies have demonstrated a more mixed hemodynamic response compared to adults, potentially due to developing neurovascular coupling mechanisms [13].
How does the Modified Beer-Lambert Law convert light attenuation into hemoglobin concentrations?
The Modified Beer-Lambert Law (MBLL) is the fundamental principle enabling fNIRS to quantify changes in hemoglobin concentrations from light attenuation measurements. This approach generalizes the traditional Beer-Lambert law to account for light scattering in biological tissues [15] [16].
Principles of the Modified Beer-Lambert Law
The MBLL is implemented through these key equations:
Optical Density: ( OD = -\log\left(\frac{I}{I_0}\right) = \varepsilon \cdot c \cdot d \cdot \mathrm{DPF} + G ) [15], where ( I_0 ) is the incident light intensity, ( I ) is the detected light intensity, ( \varepsilon ) is the molar extinction coefficient, ( c ) is the chromophore concentration, ( d ) is the source-detector separation, ( \mathrm{DPF} ) is the differential pathlength factor, and ( G ) accounts for light losses due to scattering.
Differential Form: ( \Delta OD = -\log\left(\frac{I(t)}{I_0}\right) \approx \langle L \rangle \, \Delta\mu_a(t) ) [16]. This differential form relates changes in optical density to changes in the absorption coefficient, where ( \langle L \rangle ) represents the mean photon pathlength.
For practical calculation of hemoglobin concentrations, the system of equations becomes:
[ \begin{bmatrix} \Delta OD^{\lambda_1} \\ \Delta OD^{\lambda_2} \end{bmatrix} = d \cdot \mathrm{DPF} \cdot \begin{bmatrix} \varepsilon_{HbO_2}^{\lambda_1} & \varepsilon_{HbR}^{\lambda_1} \\ \varepsilon_{HbO_2}^{\lambda_2} & \varepsilon_{HbR}^{\lambda_2} \end{bmatrix} \cdot \begin{bmatrix} \Delta [HbO_2] \\ \Delta [HbR] \end{bmatrix} ]
This allows researchers to solve for the concentration changes of oxyhemoglobin (( \Delta [HbO_2] )) and deoxyhemoglobin (( \Delta [HbR] )) by measuring optical density changes at multiple wavelengths [15] [12].
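A minimal numerical sketch of this two-wavelength inversion follows. The extinction coefficients, separation, DPF, and ΔOD values below are illustrative placeholders, not tabulated values; in practice they come from published extinction tables and the instrument configuration.

```python
import numpy as np

# Placeholder extinction matrix E: rows are wavelengths, columns [ε_HbO2, ε_HbR].
eps = np.array([[1.5, 3.8],    # λ1 ≈ 760 nm (placeholder values)
                [2.5, 1.8]])   # λ2 ≈ 850 nm (placeholder values)

d = 3.0        # source-detector separation (cm, assumed)
dpf = 6.0      # differential pathlength factor (assumed)
d_od = np.array([0.01, 0.02])  # measured ΔOD at the two wavelengths (synthetic)

# Solve  ΔOD = d · DPF · E · Δc  for Δc = [Δ[HbO2], Δ[HbR]].
delta_c = np.linalg.solve(d * dpf * eps, d_od)
d_hbo2, d_hbr = delta_c
```

With more than two wavelengths the system is overdetermined and `np.linalg.lstsq` would replace `solve`.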
FAQ 1: Why is my fNIRS signal showing an inverted hemodynamic response?
An inverted hemodynamic response (decrease in HbO₂ instead of increase) can result from several factors:
FAQ 2: How can I distinguish true neural activation from physiological noise?
Physiological noise is a common challenge in fNIRS experiments. Implement these strategies:
FAQ 3: What are the optimal parameters for the canonical hemodynamic response function in fNIRS?
The canonical HRF in fNIRS is typically modeled using two Gamma functions with these key parameters [14]:
| Parameter | Typical Value | Description |
|---|---|---|
| Response Delay | 2-4 seconds | Time to peak response after stimulus onset |
| Undershoot Delay | 8-12 seconds | Time to undershoot minimum |
| Response Dispersion | 1.0-1.5 | Width of the positive response |
| Undershoot Dispersion | 1.5-2.0 | Width of the undershoot |
| Response-to-Undershoot Ratio | 6:1 | Amplitude ratio of response to undershoot |
Optimal parameters vary by brain region, task paradigm, and population. For motor tasks, the peak HbO₂ response typically occurs around 6 seconds post-stimulus [18].
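The double-gamma model described above can be sketched as follows. The SPM-style parameterization (peak delay 6 s, undershoot delay 16 s, unit dispersions, 6:1 response-to-undershoot ratio) is an assumption and should be tuned per region, task, and population; note that with this parameterization the mode of the positive lobe falls at roughly delay minus dispersion.

```python
import numpy as np
from math import gamma as gamma_fn

def gamma_pdf(t, shape, scale):
    """Gamma probability density, evaluated elementwise over t >= 0."""
    t = np.maximum(t, 1e-12)
    return t ** (shape - 1) * np.exp(-t / scale) / (gamma_fn(shape) * scale ** shape)

def double_gamma_hrf(t, peak=6.0, under=16.0, disp=1.0, ratio=6.0):
    """Canonical HRF: positive gamma minus a scaled undershoot gamma."""
    return (gamma_pdf(t, peak / disp, disp)
            - gamma_pdf(t, under / disp, disp) / ratio)

t = np.arange(0, 30, 0.1)          # seconds
hrf = double_gamma_hrf(t)
peak_time = t[np.argmax(hrf)]      # ~5 s for these assumed parameters
```

The resulting kernel is typically convolved with the stimulus boxcar to build GLM regressors.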
Problem: Poor signal quality across multiple channels
Problem: Inconsistent responses across subjects
Problem: Difficulty interpreting HbO₂ and HbR responses
fNIRS Data Processing Workflow:
Step 1: Convert Raw Intensity to Optical Density
Step 2: Quality Assessment and Channel Exclusion
Step 3: Convert to Hemoglobin Concentrations
Step 4: Filtering and Artifact Removal
Step 5: Epoch Extraction and Analysis
Table: Frequency Filters for Physiological Noise Removal
| Noise Source | Frequency Range | Filter Type | Recommended Cutoff |
|---|---|---|---|
| Cardiac Pulsation | 0.8-2.0 Hz | Low-pass | 0.5-0.7 Hz |
| Respiratory Rate | 0.2-0.5 Hz | Band-stop | 0.2-0.5 Hz |
| Mayer Waves | 0.07-0.13 Hz | Band-stop | 0.07-0.13 Hz |
| Very Low Frequency Drift | <0.01 Hz | High-pass | 0.01 Hz |
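A hedged sketch of one such filter using SciPy's Butterworth design. The 0.01-0.5 Hz passband, 10 Hz sampling rate, and filter order are assumptions; a zero-phase (forward-backward) pass avoids distorting the hemodynamic response latency.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass(signal, fs, low=0.01, high=0.5, order=3):
    """Zero-phase Butterworth band-pass (second-order sections for stability)."""
    sos = butter(order, [low, high], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, signal)

fs = 10.0                                     # assumed fNIRS sampling rate
t = np.arange(0, 60, 1 / fs)
hemo = np.sin(2 * np.pi * 0.05 * t)           # slow "hemodynamic" component
cardiac = 0.5 * np.sin(2 * np.pi * 1.2 * t)   # cardiac pulsation contaminant
clean = bandpass(hemo + cardiac, fs)          # cardiac strongly attenuated
```

In a real pipeline the cutoffs come from the table above and from the task frequency, which must stay inside the passband.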
Table: Key Research Materials for fNIRS Experiments
| Component | Function | Specifications & Considerations |
|---|---|---|
| fNIRS Instrument | Measures light attenuation | CW (continuous wave) most common; FD (frequency domain) and TR (time-resolved) offer additional information [14] [12] |
| Optodes | Light emission and detection | Source-detector separation: 3-4 cm for adults; Material should ensure proper scalp coupling [18] |
| Wavelengths | Chromophore differentiation | Typically 760 nm (sensitive to HbR) and 830-850 nm (sensitive to HbO₂) [14] |
| Head Cap | Optode positioning | Should provide stable positioning while maintaining comfort; Various sizes for population-specific fit |
| Coupling Gel | Improves light transmission | Optional for some systems; Electro-optical gels improve signal quality |
| Digitization System | Spatial registration | 3D digitizers for co-registration with anatomical images; Essential for source localization |
| Quality Metrics | Signal validation | Scalp Coupling Index (SCI), coefficient of variation, signal-to-noise ratio [18] |
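The Scalp Coupling Index mentioned in the table can be approximated as the cardiac-band correlation between a channel's two wavelengths: with good optode-scalp coupling both wavelengths carry the same cardiac pulsation. The 0.7-1.5 Hz band, filter order, and synthetic signals below are assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def scalp_coupling_index(w1, w2, fs, band=(0.7, 1.5)):
    """Correlate the cardiac-band content of the two wavelengths of a channel."""
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    c1 = sosfiltfilt(sos, w1)
    c2 = sosfiltfilt(sos, w2)
    return np.corrcoef(c1, c2)[0, 1]

fs = 25.0                                    # assumed sampling rate
t = np.arange(0, 30, 1 / fs)
cardiac = np.sin(2 * np.pi * 1.1 * t)        # shared cardiac pulsation
rng = np.random.default_rng(2)
w760 = cardiac + 0.2 * rng.standard_normal(t.size)
w850 = 0.8 * cardiac + 0.2 * rng.standard_normal(t.size)
sci = scalp_coupling_index(w760, w850, fs)   # near 1 for good coupling
```

Channels with SCI below a chosen threshold (often around 0.7-0.8 in the literature) are candidates for exclusion.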
For researchers integrating fNIRS with EEG in multimodal studies:
Temporal Alignment
Artifact Handling
Data Fusion Approaches
Experimental Design Considerations
This technical support guide provides the fundamental principles and practical methodologies essential for successful fNIRS research, with particular attention to integration with EEG in multimodal studies. The troubleshooting recommendations address the most common challenges encountered during fNIRS experimentation and data analysis.
Functional near-infrared spectroscopy (fNIRS) and electroencephalography (EEG) are two non-invasive neuroimaging techniques that, when combined, create a powerful tool for neuroscience research. The primary rationale for their fusion lies in their complementary spatiotemporal resolution profiles. EEG measures the electrical activity of neurons with a millisecond-scale temporal resolution, allowing it to capture fast neural dynamics. However, its spatial resolution is poor, on the order of centimeters, due to the blurring effect of the skull and scalp [19]. In contrast, fNIRS (or its high-density version, Diffuse Optical Tomography - DOT) measures hemodynamic changes related to neural activity. It offers a relatively high spatial resolution (millimeter-scale) but suffers from a fundamentally limited temporal resolution because the hemodynamic response it measures evolves over several seconds [19] [8].
This complementarity is crucial for neuroscientific investigations. For instance, if two spatially close neuronal sources are activated sequentially with only a small temporal separation (e.g., 50 ms), using either EEG or fNIRS alone would fail to resolve them correctly. EEG would blur them spatially, while fNIRS would smooth them temporally [19]. Multimodal fusion aims to overcome the inherent limitations of each standalone modality.
FAQ 1: Why can't I resolve two finger-tapping events that are close in time and space with a single modality?
FAQ 2: How do I handle the inherent temporal delay of fNIRS signals relative to EEG?
The hemodynamic response measured by fNIRS has an inherent delay of several seconds compared to the electrical activity captured by EEG. A fixed temporal offset (e.g., 2-8 seconds) is sometimes applied, but this is suboptimal as the delay can vary by subject and task [20].
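One data-driven alternative to a fixed offset is to estimate the subject-specific lag by cross-correlating an EEG-derived envelope with the fNIRS trace. The signals and the 5 s "true" delay below are synthetic assumptions for illustration.

```python
import numpy as np

def estimate_lag(reference, delayed, fs):
    """Return the lag (s) of `delayed` relative to `reference` at max cross-correlation."""
    ref = reference - reference.mean()
    dly = delayed - delayed.mean()
    xcorr = np.correlate(dly, ref, mode="full")
    lags = np.arange(-len(ref) + 1, len(dly)) / fs
    return lags[np.argmax(xcorr)]

fs = 10.0
t = np.arange(0, 100, 1 / fs)
eeg_envelope = np.sin(2 * np.pi * 0.05 * t)   # synthetic EEG-derived envelope
hbo = np.roll(eeg_envelope, int(5 * fs))      # fNIRS lags EEG by 5 s here
lag = estimate_lag(eeg_envelope, hbo, fs)     # recovers ~5 s
```

The recovered lag can then shift the fNIRS regressors per subject instead of applying one global offset.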
FAQ 3: What are the best practices for filtering my simultaneous EEG-fNIRS data to maximize fusion quality?
Both signals contain physiological noise, but they manifest differently and require specific filtering approaches. The table below summarizes recommended parameters based on the signal type.
Table 1: Standard Filtering Parameters for EEG-fNIRS Fusion
| Modality | Filter Type | Typical Frequency Bands | Primary Purpose |
|---|---|---|---|
| fNIRS | Band-Pass Filter | 0.01 - 0.1 Hz [21] or 0.05 - 0.7 Hz [22] | Preserve the hemodynamic response while removing cardiac (~1 Hz), respiratory (~0.3 Hz), and very low-frequency drifts. |
| EEG | Band-Pass Filter | 1 Hz (High-Pass) and above [21] | Remove slow drifts and line noise; specific frequency bands (e.g., alpha, beta) are often extracted for analysis. |
FAQ 4: My fNIRS signals are contaminated by strong systemic physiological noise. How can fusion with EEG help?
Cardiac activity, blood pressure changes, and respiration can create noise in fNIRS that masks neural activation. While EEG is also susceptible to physiological artifacts (like ECG and EMG), the same physiological source manifests with distinct characteristics in each modality. Data-driven, unsupervised symmetric fusion methods can exploit these differences to robustly model and reject shared physiological confounders, thereby enhancing the signal-to-noise ratio of the neurally-evoked activity in both modalities [8].
This protocol uses fNIRS/DOT reconstruction as a spatial prior for EEG source localization to resolve spatiotemporally close neural events [19].
Diagram 1: Workflow for Joint EEG-DOT Source Reconstruction
This protocol is an end-to-end deep learning approach for fusing EEG and fNIRS for brain-computer interface (BCI) tasks, explicitly addressing spatial and temporal misalignment [20].
Diagram 2: STA-Net Architecture for Spatiotemporal Alignment
Table 2: Key Materials and Tools for fNIRS-EEG Fusion Research
| Item / Solution | Function / Explanation |
|---|---|
| High-Density DOT (HD-DOT) | An advanced fNIRS setup using multiple source-detector separations with overlapping sensitivity profiles to enable 3D image reconstruction of functional activation with spatial resolution comparable to fMRI [8]. |
| Open-Source Analysis Toolboxes (HOMER3, MNE-Python, EEGLAB) | Software packages providing standardized pipelines for data preprocessing, including conversion to optical density/chromophore concentration (fNIRS) and artifact removal/re-referencing (EEG) [22] [21]. |
| SimBio/FieldTrip Toolbox | An environment for advanced EEG (and MEG) forward modeling, used for calculating the leadfield matrix in a realistic head model, which is critical for source reconstruction [19]. |
| ICBM152 Brain Atlas | A standardized, non-linear asymmetric brain template used for generating realistic head models for both EEG and DOT forward modeling in simulation studies [19]. |
| Short-Separation Channels | fNIRS source-detector pairs placed with a small separation (e.g., < 1 cm) to selectively measure systemic physiological noise from the scalp. These signals can be used as regressors to improve the recovery of cerebral signals [8] [23]. |
| Cross-Modal Attention Mechanisms | A deep learning component that allows a model to dynamically focus on the most relevant features from one modality based on the context provided by the other modality, enhancing fusion performance [20] [24]. |
Neurovascular coupling (NVC) describes the fundamental physiological process whereby neural activity triggers subsequent changes in local cerebral blood flow and hemodynamics [25]. This relationship forms the critical link between the electrical brain activity measured by electroencephalography (EEG) and the hemodynamic responses measured by functional near-infrared spectroscopy (fNIRS). In combined EEG-fNIRS studies, understanding NVC is paramount, as it allows researchers to interpret the two distinct signals not as separate phenomena, but as interconnected aspects of the same underlying brain activity. The EEG signal captures the direct, millisecond-scale electrical discharges of neurons, primarily from the cortical surface [26]. Conversely, fNIRS measures the slower, second-scale hemodynamic response—changes in oxygenated (HbO) and deoxygenated hemoglobin (HbR) concentrations—that serves as an indirect marker of this neural activity [27] [25]. This complementary relationship is the cornerstone of multimodal fusion research, enabling a more complete picture of brain function by combining excellent temporal resolution (EEG) with improved spatial localization (fNIRS) [8] [27].
Q1: What are the most effective strategies to minimize motion artifacts in simultaneous EEG-fNIRS recordings?
Motion artifacts present a significant challenge, but their impact can be mitigated through a combination of hardware choices and signal processing techniques. fNIRS is generally more robust to movement than EEG [26]. Key strategies include:
Q2: How can we reliably synchronize EEG and fNIRS data streams from separate devices?
Precise temporal synchronization is essential for valid NVC analysis. The recommended approach is:
Q3: Our fNIRS signals are contaminated by strong systemic physiology (e.g., heart rate, respiration). How can this be addressed in the context of NVC analysis?
Systemic physiological interference is a common confounder in fNIRS, as the technique is sensitive to cardiac pulsation, respiratory fluctuations, and blood pressure changes [8] [30]. Several data-driven approaches can help isolate the NVC-related signal:
Q4: What is the optimal sensor placement strategy to avoid interference between EEG electrodes and fNIRS optodes?
The goal is to achieve co-registration without physical or signal interference.
Q5: Why might we observe a decoupling between EEG and fNIRS signals, and what does it signify?
Observing a decoupling—where the expected correlation between electrical and hemodynamic activity breaks down—is not always an artifact; it can be a significant physiological finding. For example, a study on cognitive-motor interference found that divided attention during a dual-task led to a decreased neurovascular coupling across theta, alpha, and beta EEG rhythms [31]. Furthermore, research on retired athletes with a history of mild traumatic brain injury (mTBI) showed a reduced hemodynamic response compared to controls, suggesting altered cerebral metabolic demands and potentially impaired NVC due to past injuries [25]. Before concluding a physiological decoupling, however, technical causes like those addressed in FAQs Q1-Q3 must be rigorously excluded.
| Artifact Type | Primary Affected Modality | Root Cause | Preventive Solutions | Corrective Processing Methods |
|---|---|---|---|---|
| Motion Artifacts | Both (EEG more susceptible) [26] | Head movement, loose cap fit | Secure, lightweight cap; accelerometer use [28] [29] | Adaptive filtering, motion correction algorithms [8] [29] |
| Systemic Physiology | fNIRS [8] | Cardiac, respiratory, blood pressure cycles | Controlled environment, subject relaxation | Filtering, PCA/ICA, short-separation regression [8] [29] |
| Scalp Hemodynamics | fNIRS | Blood flow changes in skin/scalp | Proper optode pressure & coupling | Use of short-separation channels [8] |
| Ocular/Muscle Artifacts | EEG [8] | Eye blinks (EOG), head/neck muscle (EMG) | Instruct subject to minimize movement | Blind source separation (e.g., ICA), regression [8] |
| Synchronization Errors | Data Fusion | Separate device clocks, software lag | Hardware TTL triggers, Lab Streaming Layer (LSL) [27] [28] | Post-hoc alignment using event markers |
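Post-hoc alignment using event markers (last row of the table) can be reduced to fitting a linear mapping between the two device clocks: shared triggers give paired timestamps, and a first-order fit estimates both offset and clock drift. The timestamps and drift below are synthetic.

```python
import numpy as np

# Shared trigger events as seen on each device's clock (seconds, synthetic).
eeg_marks = np.array([10.0, 70.0, 130.0, 190.0])       # EEG clock
fnirs_marks = eeg_marks * 1.0002 + 3.5                 # fNIRS clock: drift + offset

# Linear fit: EEG time as a function of fNIRS time (slope captures drift).
slope, intercept = np.polyfit(fnirs_marks, eeg_marks, 1)

def fnirs_to_eeg_time(t_fnirs):
    """Map an fNIRS timestamp onto the EEG clock."""
    return slope * t_fnirs + intercept

aligned = fnirs_to_eeg_time(100.0)   # fNIRS sample at 100 s on the EEG clock
```

With hardware TTL triggers or LSL this mapping is largely handled for you; the fit above is the fallback when streams were recorded on independent clocks.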
This protocol is designed to study how the brain allocates resources when cognitive and motor tasks are performed simultaneously, a paradigm known to modulate NVC [31].
This protocol is a validated method for assessing the integrity of the NVC response itself and has been used to study populations with suspected NVC impairment, such as those with a history of concussion [25].
| Protocol Name | Primary Research Application | Task Paradigm | Key NVC Metrics | Typical Participant Groups |
|---|---|---|---|---|
| Cognitive-Motor Interference (CMI) [31] | Divided attention, dual-task cost | Sequential single and dual tasks | EEG-fNIRS correlation in theta, alpha, beta bands | Healthy young adults, elderly, clinical populations with attention deficits |
| "Where's Wally" NVC Test [25] | NVC integrity, metabolic demand | Repeated cycles of visual search/rest | Prefrontal O2Hb increase, HHb decrease | Populations with suspected NVC impairment (e.g., mTBI, concussion) |
| n-Back Working Memory | Cognitive workload, executive function | Continuous performance task with varying memory load | Prefrontal HbO amplitude, latency; EEG theta/gamma power | Broad cognitive neuroscience, neuroergonomics, clinical studies |
A rigorous preprocessing pipeline is critical for cleaning the data and enabling a valid analysis of the relationship between EEG and fNIRS signals. The following workflow outlines the key steps for each modality before data fusion.
| Item Name | Function/Application | Technical Specifications |
|---|---|---|
| Integrated EEG-fNIRS Cap | Holds sensors in co-registered positions for simultaneous measurement. | Compatible with 10-20 system; openings for EEG electrodes & fNIRS optodes; dark-colored material to block ambient light [28] [26]. |
| Active EEG Electrodes | Measure electrical brain activity with high signal-to-noise ratio. | Active electrodes (e.g., g.SCARABEO) for reduced preparation time and motion resilience [28]. |
| fNIRS Optodes | Emit near-infrared light and detect reflected light to measure hemodynamics. | Sources (LEDs/lasers) and detectors; typical source-detector separation of 20-30 mm for cerebral measurement [28] [29]. |
| Short-Separation Channels | A specialized type of fNIRS channel for measuring and removing scalp hemodynamics. | Source-detector separation of < 1 cm; critical for robust artifact removal in data-driven analysis [8]. |
| Conductive Electrolyte Gel | Ensures low impedance electrical contact for EEG electrodes. | Saline-based or abrasive gel for wet EEG systems; not required for dry electrodes. |
| Accelerometer | Records head movement to assist in motion artifact correction. | Small, lightweight sensor attached to the cap; provides reference signal for adaptive filtering [29]. |
| Synchronization Hardware/Software | Temporally aligns EEG and fNIRS data streams from the start of recording. | TTL pulse generator, parallel port, or software platform like Lab Streaming Layer (LSL) [27] [28]. |
FAQ 1: What are the primary technical advantages of integrating fNIRS with EEG? The integration of fNIRS and EEG creates a synergistic system that overcomes the inherent limitations of each modality when used alone. Electroencephalography (EEG) records electrical activity from neuronal firing, providing excellent temporal resolution on the order of milliseconds, but suffers from relatively low spatial resolution and sensitivity to electrical noise and motion artifacts [32] [33]. Conversely, functional near-infrared spectroscopy (fNIRS) measures hemodynamic changes (changes in oxyhemoglobin (HbO) and deoxyhemoglobin (HbR) concentrations), providing good spatial resolution and being less susceptible to motion artifacts, but has lower temporal resolution due to the slow nature of the hemodynamic response [32] [34]. By combining them, the dual-modal system provides simultaneous information on both the electrical neural activity and the hemodynamic metabolic response without electromagnetic interference, offering a more complete picture of brain function [32] [35].
FAQ 2: How do I address motion artifacts in my fNIRS-EEG data? Motion artifacts (MAs) are a common challenge that can significantly degrade signal quality. The table below summarizes various correction techniques.
Table 1: Motion Artifact Correction Techniques for fNIRS and EEG
| Technique Category | Specific Methods | Description | Applicability |
|---|---|---|---|
| Algorithmic (fNIRS & EEG) | Wavelet Packet Decomposition (WPD) [36] | Decomposes signals using wavelet packets; effective for single-channel artifact correction. | fNIRS, EEG |
| Algorithmic (fNIRS & EEG) | WPD with Canonical Correlation Analysis (WPD-CCA) [36] | A two-stage method shown to improve ΔSNR by 11.28% (EEG) and 56.82% (fNIRS) over WPD alone [36]. | fNIRS, EEG |
| Hardware-Based (fNIRS) | Accelerometer-based Active Noise Cancelation (ANC) [37] | Uses accelerometer data as a noise reference for an adaptive filter to clean the fNIRS signal. | fNIRS |
| Data Handling | Channel Rejection [37] | Discarding data segments or entire channels that are heavily corrupted by motion artifacts. | fNIRS, EEG |
FAQ 3: What are the main strategies for fusing fNIRS and EEG data? Data fusion can be implemented at three primary levels, each with its own advantages.
Table 2: Data Fusion Strategies for fNIRS-EEG
| Fusion Level | Description | Advantages | Examples |
|---|---|---|---|
| Data-Level Fusion | Direct combination of raw or preprocessed data from both modalities [33]. | Potentially retains the most complete information. | - |
| Feature-Level Fusion | Extracting features from each modality (e.g., EEG band powers, fNIRS HbO/HbR slopes) and concatenating them into a combined feature vector [33] [38]. | Often provides high classification accuracy; widely used and effective. | Combining EEG band powers and fNIRS signal peaks/means for BCI [38]. |
| Decision-Level Fusion | Each modality is processed and classified independently, and the final results are combined (e.g., by voting or weighted averaging) [33]. | Can eliminate redundant information; provides robustness. | Combining SVM classifier outputs for EEG and fNIRS for mental stress detection [33]. |
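A minimal sketch of feature-level fusion as described in the table: per-trial EEG band powers and fNIRS HbO/HbR summary statistics are concatenated into one vector for a downstream classifier. The specific features and array lengths are illustrative assumptions.

```python
import numpy as np

def fuse_features(eeg_band_powers, hbo, hbr):
    """Concatenate EEG band powers with fNIRS epoch statistics into one vector."""
    fnirs_feats = np.array([hbo.mean(), hbo.max(), hbr.mean(), hbr.min()])
    return np.concatenate([eeg_band_powers, fnirs_feats])

eeg_feats = np.array([0.8, 1.2, 2.5, 0.6])    # e.g. theta/alpha/beta/gamma power
hbo = np.array([0.0, 0.1, 0.3, 0.2])          # ΔHbO over one epoch (synthetic)
hbr = np.array([0.0, -0.05, -0.1, -0.08])     # ΔHbR over one epoch (synthetic)
fused = fuse_features(eeg_feats, hbo, hbr)    # 8-dimensional fused vector
```

Because the two modalities have different scales, the fused vector is normally standardized (e.g. z-scored per feature across trials) before classification.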
FAQ 4: What are the key design considerations for an fNIRS-EEG acquisition helmet? The helmet design is critical for signal quality and co-registration. Key considerations include:
FAQ 5: My synchronized data shows temporal misalignment. How can I improve synchronization? Precise synchronization is challenging. There are two primary methods:
This is a common paradigm for testing hybrid BCI systems [33].
This protocol investigates the relationship between electrical and hemodynamic brain activity [35].
The following diagram illustrates a generalized workflow for a multimodal fNIRS-EEG data preprocessing pipeline, integrating the key steps from the protocols above.
Table 3: Essential Materials and Software for fNIRS-EEG Research
| Item Name | Type | Primary Function | Key Details |
|---|---|---|---|
| Multi-modal Acquisition Helmet | Hardware | Holds EEG electrodes and fNIRS optodes in stable, co-registered positions on the scalp. | Custom 3D-printed or thermoplastic designs are superior to standard caps for ensuring consistent probe contact [32]. |
| Unified Data Acquisition System | Hardware | Simultaneously acquires and time-stamps EEG and fNIRS data streams. | Critical for minimizing synchronization error; preferred over loosely coupling two separate systems [32]. |
| Accelerometer | Hardware | Records head movement data concurrently with brain signals. | Serves as a reference signal for hardware-based motion artifact correction algorithms in fNIRS [37]. |
| NIRS Toolbox | Software | A MATLAB-based suite for fNIRS data analysis. | Supports GLM, functional connectivity, and multi-modal analysis; compatible with NIRx data for automatic 3D probe import [39]. |
| Homer2 / Homer3 | Software | Widely used fNIRS analysis packages. | Provide GUI and script-level processing streams; compatible with data in the common *.nirs file format [39]. |
| EEGLAB | Software | A MATLAB toolbox for processing EEG data. | Used for standard preprocessing (filtering, ICA-based artifact removal) and rhythm extraction [35] [40]. |
| Turbo-Satori | Software | Real-time fNIRS analysis software. | Optimized for brain-computer interface (BCI) and neurofeedback research; integrates with NIRx acquisition systems [39]. |
The logical relationships between the core components of an integrated fNIRS-EEG system and the synergistic advantages they create are summarized below.
Functional Near-Infrared Spectroscopy (fNIRS) is a non-invasive neuroimaging technique that measures changes in cerebral blood oxygenation by shining near-infrared light through the skull and into the brain tissue. This light is absorbed differently by oxygenated hemoglobin (HbO) and deoxygenated hemoglobin (HbR), allowing researchers to infer changes in blood flow and oxygenation in specific brain regions. Preprocessing raw fNIRS data is crucial for cleaning the signal and ensuring that subsequent analyses accurately reflect underlying neural activity rather than physiological noise or motion artifacts. This guide outlines standardized steps for converting raw light intensity signals to meaningful HbO and HbR concentrations, with particular attention to the context of multimodal fNIRS-EEG fusion research.
Q1: What is the purpose of converting raw light intensity to optical density?
The initial raw light intensity measurements are influenced by various factors unrelated to brain activity, including instrument properties and ambient light. Converting to optical density (OD) standardizes the signal and provides a more stable baseline for subsequent calculations. The conversion uses the formula: OD = -log10(I/I0), where I is the detected light intensity and I0 is the emitted light intensity [41]. This step is a prerequisite for applying the Modified Beer-Lambert Law.
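A minimal sketch of this conversion, using each channel's mean intensity as the I0 baseline, which is a common but not universal convention (some pipelines use the intensity at a resting baseline period instead).

```python
import numpy as np

def intensity_to_od(intensity):
    """Convert raw intensity (channels x samples) to optical density.

    OD = -log10(I / I0), with I0 taken as the per-channel mean intensity.
    """
    i0 = intensity.mean(axis=-1, keepdims=True)
    return -np.log10(intensity / i0)

raw = np.array([[1.00, 0.95, 1.05, 1.00]])   # one channel, arbitrary units
od = intensity_to_od(raw)                    # dimensionless OD time series
```

Dips in detected intensity (more absorption) produce positive OD deflections, which the MBLL then maps to chromophore concentration changes.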
Q2: Why must I apply a bandpass filter to my fNIRS data?
Bandpass filtering is essential for isolating the hemodynamic response related to neural activity by removing unwanted physiological noise. The useful fNIRS signal is typically confined to frequencies below 0.5 Hz, while common physiological noises occur at higher frequencies: heartbeat (~1-2 Hz), respiration (~0.4 Hz), and Mayer waves related to blood pressure (~0.1 Hz) [41]. A typical bandpass filter with cutoffs of 0.01 Hz (or 0.05 Hz) to 0.5 Hz effectively removes these noise components while preserving the signal of interest [22] [41].
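A hedged sketch of such a bandpass stage using SciPy (the cutoff values follow the text above; the zero-phase Butterworth implementation and simulated signals are illustrative choices, not a prescribed method):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_fnirs(signal, fs, low=0.01, high=0.5, order=3):
    """Zero-phase Butterworth band-pass for fNIRS time series.

    Keeps the slow hemodynamic band (low..high Hz) and attenuates
    cardiac (~1-2 Hz) and respiratory (~0.4 Hz) components.
    """
    b, a = butter(order, [low, high], btype="band", fs=fs)
    return filtfilt(b, a, signal)  # forward-backward: no phase distortion

fs = 10.0  # Hz, a typical fNIRS sampling rate
t = np.arange(0, 600, 1 / fs)
hemo = np.sin(2 * np.pi * 0.05 * t)          # slow "hemodynamic" component
cardiac = 0.5 * np.sin(2 * np.pi * 1.2 * t)  # simulated cardiac noise
clean = bandpass_fnirs(hemo + cardiac, fs)
```

Away from the recording edges, the filtered trace closely follows the slow component while the cardiac oscillation is strongly suppressed.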
Q3: What are motion artifacts, and how can I correct for them?
Motion artifacts are sudden, large shifts in the signal caused by head movements, muscle contractions, or other physical activities during recording [41]. They are a major source of noise and can obscure the true hemodynamic response. Several correction methods are available, and the choice depends on your data and software. Common algorithms include:
Q4: How does the Modified Beer-Lambert Law work?
The Modified Beer-Lambert Law (MBLL) relates changes in optical density to changes in the concentration of chromophores (HbO and HbR) in the tissue. It modifies the classic law to account for light scattering in biological tissues. The formula for the change in optical density at a given wavelength is:
ΔOD(λ) = α(λ) * Δc * l * DPF(λ)
Where:
ΔOD(λ) is the change in optical density at wavelength λ.
α(λ) is the molar extinction coefficient of the chromophore at wavelength λ.
Δc is the change in chromophore concentration.
l is the source-detector separation (physical distance).
DPF(λ) is the Differential Pathlength Factor, which accounts for the increased path length due to scattering [42] [41].
Q5: Why are short source-detector channels important?
Short-distance channels (typically with a separation of less than 1 cm) are primarily sensitive to physiological noise in the scalp and skull rather than brain activity [22]. By measuring this superficial noise, they provide a regressor that can be used to remove it from the standard channels (which contain a mixture of brain signal and superficial noise), thereby enhancing the brain-specific signal [43]. This is a key step in improving the quality of fNIRS data.
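The short-channel regression idea can be sketched as a simple ordinary-least-squares fit: the short channel is fitted to the long channel and the fitted component is subtracted. The helper name and the simulated Mayer-wave/brain signals below are illustrative assumptions; real pipelines often use more elaborate GLM-based regression:

```python
import numpy as np

def short_channel_regression(long_ch, short_ch):
    """Regress superficial physiology out of a long fNIRS channel.

    Fits the short-separation channel to the long channel by ordinary
    least squares and subtracts the fitted component, leaving the
    brain-specific residual.
    """
    short_c = short_ch - short_ch.mean()
    long_c = long_ch - long_ch.mean()
    beta = np.dot(short_c, long_c) / np.dot(short_c, short_c)
    return long_c - beta * short_c

t = np.linspace(0, 60, 600)
scalp = np.sin(2 * np.pi * 0.1 * t)          # Mayer-wave-like superficial noise
brain = 0.3 * np.sin(2 * np.pi * 0.03 * t)   # slow brain-specific signal
long_ch = brain + 0.8 * scalp                # long channel mixes both
cleaned = short_channel_regression(long_ch, scalp)
```

After regression, the residual tracks the simulated brain signal and has much lower variance than the raw long channel.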
Q6: What specific considerations exist for fNIRS-EEG fusion?
Successful fusion of fNIRS and EEG data requires careful preprocessing to account for the different nature of the signals. fNIRS measures slow hemodynamic changes (requiring high-pass filtering around 0.01-0.05 Hz), while EEG measures fast electrical potentials (often filtered between 0.5-70 Hz) [17] [24]. A major challenge is that many studies incorporate robust artifact handling for EEG, but confounder correction in fNIRS remains limited to basic filtering or motion removal [17]. Furthermore, short-separation measurements for fNIRS are still underutilized in fusion studies [17]. Fusion methods themselves can be categorized as early (signal-level), middle (feature-level), or late (decision-level) fusion.
The following diagram illustrates the complete, standardized workflow for preprocessing fNIRS data, from raw measurements to analysis-ready hemoglobin concentrations.
OD = -log10(I/I0) [22] [41].
Table 1: Parameters and typical values for converting optical density to hemoglobin concentration.
| Parameter | Symbol | Description | Typical Value / Formula |
|---|---|---|---|
| Molar Extinction Coefficient | ε | Wavelength-specific absorption property of HbO and HbR. | Look-up tables (e.g., for 730 nm & 850 nm) [42]. |
| Source-Detector Separation | l | Physical distance between light source and detector on the scalp. | 2.5 - 3.0 cm [29]. |
| Differential Pathlength Factor | DPF | Factor correcting for increased photon pathlength due to scattering. | Wavelength- and age-dependent (e.g., ~6 for adults) [42] [41]. |
| Partial Pathlength Factor | PPF | Combined factor (DPF * PVF) accounting for scattering and the fraction of path in brain tissue. | Used in some advanced models [41]. |
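With two wavelengths and the parameters in Table 1, the MBLL reduces to a 2x2 linear system that can be inverted for ΔHbO and ΔHbR. The sketch below demonstrates the algebra only; the extinction coefficients are illustrative placeholders, not tabulated values, so real analyses must use published look-up tables:

```python
import numpy as np

def mbll_inversion(d_od, ext, distance_cm, dpf):
    """Solve the two-wavelength MBLL system for (dHbO, dHbR).

    d_od : (2,) optical-density changes at the two wavelengths
    ext  : (2, 2) molar extinction coefficients,
           rows = wavelengths, columns = (HbO, HbR)
    distance_cm, dpf : source-detector separation and per-wavelength
           differential pathlength factors.
    """
    pathlen = distance_cm * np.asarray(dpf)          # effective l * DPF(lambda)
    # d_od = (ext * pathlen) @ [dHbO, dHbR]  ->  invert the 2x2 system
    return np.linalg.solve(ext * pathlen[:, None], np.asarray(d_od))

# Illustrative (NOT tabulated) extinction coefficients for 730/850 nm.
ext = np.array([[0.39, 1.10],   # 730 nm: (HbO, HbR)
                [1.06, 0.69]])  # 850 nm: (HbO, HbR)
true_conc = np.array([2.0e-3, -1.0e-3])  # assumed (dHbO, dHbR) ground truth
d_od = (ext * (3.0 * np.array([6.0, 6.0]))[:, None]) @ true_conc
recovered = mbll_inversion(d_od, ext, distance_cm=3.0, dpf=[6.0, 6.0])
```

The inversion exactly recovers the simulated concentration changes, illustrating why the two wavelengths must straddle the isosbestic point: otherwise the system matrix becomes near-singular.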
Table 2: Frequency ranges of common physiological noise sources in fNIRS signals.
| Noise Source | Frequency Range | Notes |
|---|---|---|
| Heartbeat | ~1 - 2 Hz | Can be suppressed by low-pass filtering [41]. |
| Respiration | ~0.4 Hz | Can be suppressed by low-pass filtering [41]. |
| Mayer Waves (Blood Pressure) | ~0.1 Hz | Very close to the signal of interest; may require careful filtering or source separation [41]. |
| Hemodynamic Response | < 0.1 Hz | The target signal for functional brain activation studies [41]. |
Table 3: A selection of key software tools for fNIRS data preprocessing and analysis.
| Tool Name | Primary Function | Key Feature | URL/Location |
|---|---|---|---|
| HOMER2 / HOMER3 | Comprehensive fNIRS analysis | GUI and scripting; extensive processing stream including MBLL, motion correction, and filtering. | homer-fnirs.org [44] |
| MNE-Python | Multimodal neuroimaging (EEG/MEG/fNIRS) | Python-based; integrates fNIRS preprocessing with EEG analysis, ideal for fusion research. | mne.tools [22] [44] |
| NIRSLab | Complete fNIRS data analysis | Modules for registration, preprocessing, 3D projection, and GLM analysis. | nirs-lab.com [44] |
| NIRS-SPM | Statistical parametric mapping | SPM-based toolbox for statistical analysis of fNIRS signals. | bisp.kaist.ac.kr [44] |
| fnirsSOFT (BIOPAC) | Process, analyze, and visualize fNIRS | Stand-alone software with a graphical user interface. | nirx.net/fnirssoft [44] |
A statistically robust re-referencing procedure is crucial for mitigating the effect of reference electrode activity, which can contaminate all EEG channels. The common average reference (CAR) is widely used but can be biased by neural activity present at the reference site. A robust maximum-likelihood type estimator can be adapted to mitigate this issue.
Methodology for Robust Re-referencing:
1. Model the measured signal dt,k at channel k and time t as: dt,k = st,k − rt + nt,k, where st,k is the ideal silent-reference signal, rt is the unknown reference voltage, and nt,k is sensor noise [45].
2. Estimate the unknown reference voltage at each time point with a robust maximum-likelihood type estimator, yielding řt. This approach reduces the influence of channels with high-amplitude neural activity, which act as outliers in the reference estimation [45].
3. Reconstruct the re-referenced signal as št,k = mt,k + řt, where mt,k denotes the measured signal [45].

This procedure is simple, fast, and avoids the substantial bias that can occur with traditional methods like CAR, especially when working with low-density EEG setups [45].
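A lightweight sketch of the idea, substituting the per-sample channel median for the full maximum-likelihood estimator (the median is a simple robust stand-in, not the estimator described in [45]; the simulated signals are illustrative):

```python
import numpy as np

def robust_rereference(data):
    """Re-reference EEG with a per-sample robust reference estimate.

    data : (n_channels, n_times) measured voltages. The channel-wise
    median at each time point stands in for the robust reference
    estimate; unlike the common average reference (CAR), it resists
    outlier channels carrying high-amplitude activity.
    """
    ref = np.median(data, axis=0, keepdims=True)
    return data - ref

rng = np.random.default_rng(1)
n_ch, n_t = 8, 200
signals = 0.1 * rng.standard_normal((n_ch, n_t))  # "silent-reference" activity
signals[0] += 5.0                                  # one high-amplitude outlier channel
reference = np.sin(np.linspace(0, 6, n_t))         # drift at the reference electrode
measured = signals + reference                     # reference leaks into every channel
cleaned = robust_rereference(measured)
```

Because the outlier channel barely moves the median, the recovered traces stay close to the original signals, whereas a mean-based CAR would be biased by the outlier on every channel.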
Filtering is essential for removing unwanted biological and line noise artifacts from the EEG signal. The table below summarizes standard parameters for a basic preprocessing pipeline, with examples from recent research.
Table 1: Standard EEG Filtering Parameters and Applications
| Filter Type | Standard Frequency Bands | Purpose | Example from Literature | Key Considerations |
|---|---|---|---|---|
| High-Pass Filter | ≥ 0.5 Hz or 1 Hz [46] [47] | Removes slow drifts and DC offset; improves ICA decomposition quality [46]. | A 1 Hz high-pass filter is recommended before ICA [46]. | Overly aggressive high-pass filtering (e.g., >0.5 Hz) can distort ERPs [47]. |
| Low-Pass Filter | ≤ 30 Hz to 40 Hz [35] [48] | Attenuates high-frequency muscle noise and other high-frequency artifacts. | A 40 Hz low-pass filter was used in an etomidate study to focus on classic EEG rhythms [35]. | The cutoff should be above the highest frequency of interest for your analysis. |
| Band-Stop (Notch) Filter | 50 Hz or 60 Hz (region-dependent) | Removes mains line noise. | A 50 Hz notch filter was applied in a visual evoked potential study [48]. | As an alternative, consider adaptive methods like the CleanLine plugin for line noise removal [46]. |
| Band-Pass for Rhythms | δ (1-3 Hz), θ (3-8 Hz), α (8-13 Hz), β (13-30 Hz), γ (30-40 Hz) [35] | Isolates specific neural oscillatory rhythms for analysis. | Rhythms were extracted using band-pass filters in a neurovascular coupling study [35]. | Use basic FIR filters and avoid causal filters if phase preservation is critical [35] [46]. |
Experimental Protocol: A typical filtering sequence for continuous data, as implemented in EEGLAB, involves:
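An analogous sequence can be sketched in Python with SciPy, following the parameters in Table 1 (1 Hz high-pass, 50 Hz notch, 40 Hz low-pass). This is a minimal illustration, not the EEGLAB implementation; the zero-phase application via `filtfilt` mirrors the non-causal filtering recommendation above:

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

def filter_continuous_eeg(x, fs):
    """Typical continuous-EEG filtering sequence (sketch):
    1) 1 Hz high-pass (drift removal, better ICA decomposition),
    2) 50 Hz notch (mains line noise),
    3) 40 Hz low-pass (muscle/high-frequency noise).
    All zero-phase via filtfilt to preserve ERP timing.
    """
    b, a = butter(4, 1.0, btype="highpass", fs=fs)
    x = filtfilt(b, a, x)
    b, a = iirnotch(50.0, Q=30.0, fs=fs)
    x = filtfilt(b, a, x)
    b, a = butter(4, 40.0, btype="lowpass", fs=fs)
    return filtfilt(b, a, x)

fs = 250.0
t = np.arange(0, 10, 1 / fs)
alpha = np.sin(2 * np.pi * 10 * t)        # 10 Hz alpha-band signal
mains = 0.8 * np.sin(2 * np.pi * 50 * t)  # 50 Hz line noise
drift = 2.0 * t / t.max()                 # slow DC drift
filtered = filter_continuous_eeg(alpha + mains + drift, fs)
```

Away from the edges, the 10 Hz component survives essentially unchanged while the drift and line noise are removed.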
Epoch extraction involves segmenting the continuous EEG signal into time-locked windows around events of interest. The key is to balance sufficient baseline and post-stimulus periods while managing data dimensionality.
Detailed Methodology for Epoch Extraction:
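The core segmentation logic can be sketched with NumPy (function name, window defaults, and the ramp test signal are illustrative; toolboxes such as MNE-Python wrap this in `mne.Epochs`):

```python
import numpy as np

def extract_epochs(data, events, fs, tmin=-0.2, tmax=0.8):
    """Cut continuous data (n_channels, n_times) into event-locked epochs.

    events : sample indices of stimulus onsets.
    Returns (n_epochs, n_channels, n_samples); events whose window
    would run past the recording edges are skipped, and each epoch is
    baseline-corrected using the pre-stimulus interval.
    """
    pre, post = int(round(-tmin * fs)), int(round(tmax * fs))
    epochs = []
    for ev in events:
        if ev - pre < 0 or ev + post > data.shape[1]:
            continue  # incomplete window near a recording edge
        ep = data[:, ev - pre:ev + post].astype(float)
        ep -= ep[:, :pre].mean(axis=1, keepdims=True)  # baseline correction
        epochs.append(ep)
    return np.stack(epochs)

fs = 100.0
data = np.tile(np.arange(1000.0), (2, 1))  # 2 channels, 10 s linear ramp
epochs = extract_epochs(data, events=[5, 300, 700, 995], fs=fs)
```

Events at samples 5 and 995 are dropped because their -0.2 to 0.8 s windows would extend beyond the recording, leaving two baseline-corrected epochs.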
Table 2: Common ERP Visualization Types and Their Uses
| Plot Type | Dimensions Visualized | Best Use Case | Common Tools / Functions |
|---|---|---|---|
| ERP Plot | Time (sliced), Condition (sliced) | Showing the amplitude time-course of a specific component at a selected channel. | Standard in all ERP toolboxes. |
| Butterfly Plot | Sensors (all), Time (sliced) | Overview of all channel activities for a single condition; identifying widespread artifacts. | epochs.plot() in MNE-Python. |
| Topoplot | Sensors (all, spatial), Time (sliced) | Visualizing the spatial distribution of voltage at a specific latency. | topoplot in EEGLAB, plot_topomap in MNE. |
| Channel Image | Time, Sensors | Depicting trial-by-trial and channel-by-channel activity as a heatmap; useful for identifying consistent patterns. | Used in the LIMO toolbox [49]. |
Inconsistencies often arise from subtle differences in parameter choices, software tools, or the order of operations.
Troubleshooting Guide:
Problem: Inconsistent PSDs or peaks after filtering.
Problem: Poor baseline in epoched data after high-pass filtering.
Problem: Low classification accuracy or signal quality after artifact removal.
Integrating fNIRS and EEG requires specific hardware and software to enable synchronous data acquisition and analysis.
Table 3: Research Reagent Solutions for fNIRS-EEG Fusion
| Item | Function / Description | Example from Literature |
|---|---|---|
| EEG Acquisition System | Records electrical activity from the scalp with high temporal resolution. | 64-channel EEG system (e.g., NeuSen W) with sampling frequency ≥ 250 Hz [35]. |
| fNIRS Acquisition System | Measures hemodynamic changes by detecting near-infrared light attenuation, providing spatial resolution. | System with multiple emitters and detectors (e.g., 23 emitters, 16 detectors) using dual-wavelength lasers (730 & 850 nm) [35]. |
| Integrated Probe Cap | A custom helmet or cap that holds both EEG electrodes and fNIRS optodes in a co-registered spatial arrangement. | Flexible EEG cap with punctures for fNIRS fixtures; 3D-printed custom helmet for better fit and stability [32]. |
| Synchronization Hardware/Software | Ensures precise temporal alignment of EEG and fNIRS data streams. | Unified processor for simultaneous acquisition; or synchronization of separate systems via host computer [32]. |
| Software for Multimodal Analysis | Tools for preprocessing, feature fusion, and joint analysis of the two data modalities. | Custom scripts in MATLAB/Python; toolboxes like EEGLAB for EEG and Homer2 for fNIRS [35] [32]. |
Figure 1: A standardized preprocessing pipeline for EEG data within an fNIRS-EEG fusion framework. Dashed lines indicate the integration point for synchronized fNIRS data prior to multimodal analysis.
In functional near-infrared spectroscopy (fNIRS) and electroencephalography (EEG) fusion research, artifacts represent non-neural signals that can significantly corrupt data quality and interpretation. These unwanted signals originate from multiple sources: motion artifacts from subject movement, physiological artifacts from cardiac pulsation, respiration, and blood pressure changes, and environmental artifacts from instrumental and external interference [12] [8]. Effective artifact handling is particularly crucial in fNIRS-EEG studies because both modalities are susceptible to different artifact types with distinct characteristics, and the fusion process can amplify artifacts if not properly addressed [8] [51]. The portability of fNIRS and EEG systems enables brain monitoring in naturalistic settings, but this advantage comes with increased vulnerability to artifacts, making robust preprocessing pipelines essential for data validity [8] [52].
Motion artifacts (MAs) arise from imperfect contact between sensors and the scalp during subject movement. In fNIRS, optode displacement causes sudden, high-amplitude signal shifts due to changes in light coupling efficiency [37]. Specific movements causing MAs include head movements (nodding, shaking, tilting), facial muscle movements (raising eyebrows), body movements (limb movements causing head motion), and jaw movements (talking, eating) [37]. In EEG, motion creates similar signal disruptions but manifests as electrical potential changes from electrode movement relative to skin [52] [36]. Motion artifacts typically exhibit higher amplitude and different frequency characteristics compared to the underlying neural signals in both modalities.
Multiple algorithmic approaches exist for motion artifact correction, each with distinct advantages:
Table 1: Motion Artifact Correction Methods for fNIRS and EEG
| Method | Modality | Principle | Performance Metrics | Limitations |
|---|---|---|---|---|
| Wavelet Packet Decomposition (WPD) | fNIRS & EEG | Signal decomposition into wavelet packets with artifact component identification and removal [36] | ΔSNR: 29.44 dB (EEG), 16.11 dB (fNIRS); η: 53.48% (EEG), 26.40% (fNIRS) [36] | Wavelet selection affects performance |
| WPD with Canonical Correlation Analysis (WPD-CCA) | fNIRS & EEG | Two-stage approach: WPD followed by CCA for enhanced artifact separation [36] | ΔSNR: 30.76 dB (EEG), 16.55 dB (fNIRS); η: 59.51% (EEG), 41.40% (fNIRS) [36] | Increased computational complexity |
| Accelerometer-Based Methods (ABAMAR/ABMARA) | fNIRS | Use accelerometer data as noise reference for adaptive filtering [37] | Enables real-time rejection; improves classification accuracy [37] | Requires additional hardware; placement affects performance |
| Moving Average & Spline Interpolation | fNIRS | Identifies artifact periods and interpolates using clean data segments [37] | Simple implementation; effective for isolated artifacts [37] | Can distort signal morphology near artifacts |
The following workflow illustrates a systematic approach to motion artifact management:
Motion Artifact Management Workflow
Physiological artifacts originate from various bodily functions with distinct manifestations in fNIRS and EEG:
Table 2: Physiological Artifacts in fNIRS and EEG
| Source | fNIRS Manifestation | EEG Manifestation | Frequency Characteristics |
|---|---|---|---|
| Cardiac Pulsation | Low-frequency oscillations from blood volume changes | Electrical spikes from heart muscle activity (ECG) [8] | fNIRS: ~1-1.5 Hz; EEG: ~1-1.5 Hz with sharper peaks |
| Respiration | Slow oscillations from blood pressure and volume changes | Minimal direct impact | fNIRS: ~0.2-0.3 Hz; EEG: Less prominent |
| Mayer Waves | Very low-frequency oscillations from blood pressure regulation | Not typically detectable | fNIRS: ~0.1 Hz; EEG: Not applicable |
| Blood Pressure Changes | Systemic hemodynamic fluctuations | Not typically detectable | fNIRS: <0.1 Hz; EEG: Not applicable |
The fundamental difference lies in how these physiological processes affect each modality: fNIRS captures the hemodynamic consequences (blood volume/oxygenation changes), while EEG records the bioelectrical activity directly [8]. This distinction is crucial for designing effective artifact removal strategies.
Frequency-domain filtering is the primary approach for physiological artifact removal:
For optimal results, filter selection should consider both the artifact characteristics and the target neural signal properties. Finite impulse response (FIR) filters are generally preferred over infinite impulse response (IIR) filters for neurophysiological data due to their linear phase characteristics and stability [12].
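A linear-phase FIR low-pass of the kind described can be designed in a few lines with SciPy. The tap count, cutoff, and simulated signals are illustrative assumptions; zero-phase application doubles the stopband attenuation:

```python
import numpy as np
from scipy.signal import firwin, filtfilt

# Linear-phase FIR low-pass (0.5 Hz cutoff) to suppress cardiac noise in fNIRS.
fs = 10.0                                        # fNIRS sampling rate (Hz)
taps = firwin(numtaps=101, cutoff=0.5, fs=fs)    # Hamming-window FIR design

t = np.arange(0, 120, 1 / fs)
hemo = np.sin(2 * np.pi * 0.05 * t)              # hemodynamic-band target signal
cardiac = 0.5 * np.sin(2 * np.pi * 1.2 * t)      # ~1.2 Hz cardiac component
cleaned = filtfilt(taps, 1.0, hemo + cardiac)    # zero-phase application
```

Because the FIR filter has exactly linear phase (zero phase after `filtfilt`), the slow hemodynamic waveform passes through without the phase distortion an IIR filter would introduce in a single causal pass.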
Environmental artifacts stem from external sources rather than subject physiology:
These artifacts are particularly problematic in naturalistic settings where laboratory-level environmental control is impossible [8] [52]. The move toward wearable fNIRS and EEG systems for ecological monitoring has amplified the significance of environmental artifact management.
A multi-layered approach is most effective for environmental artifact management:
Purpose: To effectively remove motion artifacts from single-channel fNIRS and EEG signals using the WPD-CCA method [36].
Materials: Raw fNIRS/EEG data with motion artifacts, signal processing software (MATLAB, Python), WPD-CCA implementation.
Procedure:
Expected Outcomes: Significant improvement in signal-to-noise ratio (average ΔSNR: 30.76 dB for EEG, 16.55 dB for fNIRS) and motion artifact reduction (average η: 59.51% for EEG, 41.40% for fNIRS) [36].
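As a lighter-weight alternative to WPD-CCA, the moving-average-plus-interpolation strategy from Table 1 can be sketched with NumPy. The detection threshold, window length, and simulated spike are illustrative assumptions (real pipelines often use spline rather than linear interpolation):

```python
import numpy as np

def interpolate_motion_artifacts(x, threshold=5.0, win=5):
    """Detect and patch spike-like motion artifacts (sketch).

    Samples whose deviation from a moving average exceeds `threshold`
    robust standard deviations are treated as artifact and replaced by
    linear interpolation from surrounding clean samples.
    """
    x = np.asarray(x, dtype=float)
    smooth = np.convolve(x, np.ones(win) / win, mode="same")
    resid = x - smooth
    sigma = 1.4826 * np.median(np.abs(resid - np.median(resid)))  # robust SD
    bad = np.abs(resid) > threshold * sigma
    good = ~bad
    cleaned = x.copy()
    cleaned[bad] = np.interp(np.flatnonzero(bad), np.flatnonzero(good), x[good])
    return cleaned

rng = np.random.default_rng(2)
sig = np.sin(np.linspace(0, 4 * np.pi, 400)) + 0.01 * rng.standard_normal(400)
sig_art = sig.copy()
sig_art[100] += 8.0  # isolated motion spike
cleaned = interpolate_motion_artifacts(sig_art)
```

The spike (and its neighbours contaminated by the moving average) are flagged and bridged, so the cleaned trace stays close to the artifact-free signal. This matches the limitation noted in Table 1: interpolation works well for isolated artifacts but can distort morphology when artifacts are dense.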
Purpose: To remove physiological noise (cardiac, respiratory, Mayer waves) from fNIRS signals using frequency-domain filtering [12].
Materials: Raw fNIRS data, digital filter implementation (FIR/IIR), spectral analysis tool.
Procedure:
Expected Outcomes: Cleaned fNIRS signals with preserved task-related hemodynamic responses and significantly reduced physiological noise components.
Table 3: Essential Materials for fNIRS-EEG Artifact Handling Research
| Item | Function | Application Notes |
|---|---|---|
| Accelerometers | Motion detection and reference signal for artifact correction [37] | Place near optodes/electrodes; sample rate ≥100 Hz |
| Inertial Measurement Units (IMUs) | Multi-axis motion detection (acceleration, rotation) [37] | Provides comprehensive movement data for motion artifact correction |
| Short-Separation fNIRS Detectors | Measures superficial signals for global noise regression [8] | Place 0.8-1.5 cm from source; critical for separating cerebral and extracerebral signals |
| Electrode/Gel Compatibility Testing Kit | Ensures optimal electrode-skin interface for EEG [52] | Reduces impedance-related artifacts; essential for wearable EEG |
| Optode Stabilization Systems | Minimizes optode movement relative to scalp [37] | Headbands, custom caps; critical for motion-prone paradigms |
| Optical Shielding Materials | Prevents ambient light contamination in fNIRS [37] | Black cloth, opaque caps; essential for valid fNIRS measurements |
| Reference Noise Recording Electrodes | Records environmental noise for adaptive filtering [52] | Place away from scalp; provides noise reference for advanced filtering |
The choice depends on artifact characteristics and research objectives. Wavelet-based methods (WPD, WPD-CCA) are particularly effective for non-stationary, transient artifacts like motion artifacts, as they provide both time and frequency information [36]. Filter-based approaches are more suitable for periodic, stationary physiological artifacts with consistent frequency characteristics (cardiac, respiratory noise) [12]. For comprehensive artifact handling, a combined approach often yields best results: wavelet methods for motion artifacts followed by frequency filtering for physiological noise.
The optimal metrics depend on data availability and artifact type:
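When a ground-truth (artifact-free) signal is available, the SNR-improvement metric ΔSNR reported above can be computed directly. The exact formulation varies across papers; the definition below, SNR = 10·log10(P_signal / P_residual), is one common assumed convention:

```python
import numpy as np

def delta_snr(reference, before, after):
    """SNR improvement (dB) of an artifact-correction step.

    dSNR = SNR_after - SNR_before, where
    SNR = 10*log10(P_signal / P_residual) against a known reference.
    This definition is assumed for illustration; published studies
    differ in the exact formulation.
    """
    p_sig = np.mean(reference ** 2)
    snr_before = 10 * np.log10(p_sig / np.mean((before - reference) ** 2))
    snr_after = 10 * np.log10(p_sig / np.mean((after - reference) ** 2))
    return snr_after - snr_before

t = np.linspace(0, 10, 1000)
truth = np.sin(t)
noisy = truth + 0.5 * np.sin(20 * t)      # artifact-laden input
denoised = truth + 0.05 * np.sin(20 * t)  # after correction (10x less artifact)
delta = delta_snr(truth, noisy, denoised)
```

A tenfold reduction in residual amplitude corresponds to a 100-fold power reduction, i.e. ΔSNR = 20 dB.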
Methodological variability in artifact handling stems from multiple factors: different parameter selections in algorithms, varying quality thresholds for data inclusion, diverse approaches to handling borderline cases, and researcher experience levels [53]. The FRESH initiative found that nearly 80% of research teams agreed on group-level results when hypotheses were strongly literature-supported, but individual-level analyses showed greater variability [53]. Teams with higher self-reported analysis confidence (correlated with fNIRS experience) demonstrated greater inter-team agreement, highlighting the importance of methodological expertise and standardized reporting.
Real-time artifact handling requires balancing computational efficiency with effectiveness:
The following diagram illustrates the relationship between various artifacts and corresponding correction methods:
Artifact-Method Relationship Diagram
The integration of Electroencephalography (EEG) and functional Near-Infrared Spectroscopy (fNIRS) has emerged as a powerful approach in brain-computer interface (BCI) and cognitive neuroscience research. EEG measures neuronal electrical activity with millisecond temporal resolution, while fNIRS measures hemodynamic responses with better spatial localization [8]. These complementary properties make them ideal for multimodal fusion, which can be strategically implemented at three primary stages: early, middle, and late fusion. Understanding these approaches is essential for designing effective data preprocessing pipelines in fNIRS-EEG research.
Early-stage fusion involves combining raw or minimally processed data from both modalities before feature extraction. This approach preserves the richest information but requires handling different temporal resolutions and dimensionalities [54] [9].
Middle-stage fusion (also called feature-level fusion) integrates extracted features from each modality before classification. This allows for specialized processing for each signal type while capturing cross-modal relationships [55] [56].
Late-stage fusion (decision-level fusion) processes each modality through separate pipelines and combines the results at the decision level. This provides robustness when modalities have different reliability patterns but may underutilize complementary information [24] [57].
The Y-shaped neural network architecture provides a validated protocol for early fusion implementation [54]:
Data Preprocessing: Downsample EEG to 128 Hz, apply band-pass filter (8-25 Hz), and select 8 electrodes around the sensorimotor cortex. Normalize fNIRS HbO/HbR signals and temporally align with EEG.
Network Architecture: Implement independent encoders for each modality in the initial layers, then merge the streams for joint processing.
Training Configuration: Use leave-one-out cross-validation with 29 participants performing left/right hand motor imagery tasks.
This approach achieved 76.21% classification accuracy, significantly outperforming middle and late fusion in the same study (N = 57, p < 0.05) [54].
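The preprocessing and alignment steps above can be sketched as follows. The helper names, filter order, and simulated array shapes are illustrative assumptions, not the published implementation:

```python
import numpy as np
from scipy.signal import butter, filtfilt, resample_poly

def preprocess_eeg_for_fusion(eeg, fs_in, fs_out=128, band=(8.0, 25.0)):
    """EEG branch of an early-fusion pipeline (sketch):
    resample to fs_out, then band-pass to the motor-imagery band."""
    eeg = resample_poly(eeg, int(fs_out), int(fs_in), axis=-1)
    b, a = butter(4, band, btype="band", fs=fs_out)
    return filtfilt(b, a, eeg, axis=-1)

def align_fnirs_to_eeg(fnirs, fs_fnirs, n_eeg_samples, fs_eeg=128):
    """Upsample a slow fNIRS trace (e.g., HbO) onto the EEG time base
    by linear interpolation so both streams share one sample axis."""
    t_fnirs = np.arange(fnirs.shape[-1]) / fs_fnirs
    t_eeg = np.arange(n_eeg_samples) / fs_eeg
    return np.interp(t_eeg, t_fnirs, fnirs)

rng = np.random.default_rng(3)
eeg = rng.standard_normal((8, 1000))        # 8 sensorimotor channels, 1 s @ 1000 Hz
eeg_p = preprocess_eeg_for_fusion(eeg, fs_in=1000)
hbo = np.sin(np.linspace(0, 3, 10))         # 1 s of HbO sampled at 10 Hz
aligned = align_fnirs_to_eeg(hbo, fs_fnirs=10, n_eeg_samples=eeg_p.shape[-1])
```

After this step both modalities share a 128 Hz time base, so their arrays can be concatenated or fed to modality-specific encoders in a Y-shaped network.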
For feature-level fusion in drug addiction detection [55]:
Modality-Specific Processing: Use Tception module for EEG temporal features and Sception module for fNIRS spatial features.
Attention Mechanisms: Incorporate separate attention modules to weight informative features and reduce redundancy.
Feature Integration: Concatenate attended features from both modalities before the final classification layer.
This protocol achieved 92.6% accuracy in classifying healthy individuals versus those with drug addiction using six-fold cross-validation [55].
For decision-level integration in motor imagery classification [57]:
Independent Processing Paths:
Uncertainty Modeling: Quantify decisions using Dirichlet distribution parameter estimation
Evidence Fusion: Apply Dempster-Shafer Theory for two-layer reasoning with basic belief assignment
This method achieved 83.26% accuracy on motor imagery tasks, a 3.78% improvement over previous benchmarks [57].
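The Dempster-Shafer combination step can be illustrated for two classifiers whose belief masses live on singleton classes plus an "uncertain" full set (a common simplification; the published two-layer Dirichlet-based scheme is more elaborate than this sketch):

```python
import numpy as np

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic belief assignments
    over the same singleton classes (no compound sets).

    m1, m2 : masses over K classes; any remainder (1 - sum) is mass
    on the full set, i.e. explicit ignorance/uncertainty.
    Returns (fused class masses, fused uncertainty mass).
    """
    m1, m2 = np.asarray(m1, float), np.asarray(m2, float)
    u1, u2 = 1 - m1.sum(), 1 - m2.sum()           # ignorance masses
    comb = m1 * m2 + m1 * u2 + m2 * u1            # agreeing singleton evidence
    conflict = m1.sum() * m2.sum() - np.dot(m1, m2)  # mass on disjoint class pairs
    total = 1.0 - conflict                         # normalisation constant
    return comb / total, (u1 * u2) / total

# EEG and fNIRS classifiers both favour class 0, with different confidence.
m_eeg = np.array([0.6, 0.2])    # remaining 0.2 = EEG uncertainty
m_fnirs = np.array([0.7, 0.1])  # remaining 0.2 = fNIRS uncertainty
fused, uncertainty = dempster_combine(m_eeg, m_fnirs)
# fused == [0.85, 0.10], uncertainty == 0.05
```

Agreement between the two sources sharpens the fused belief in class 0 beyond either classifier alone, while the residual uncertainty shrinks, which is exactly the behaviour exploited by evidence-based late fusion.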
Table 1: Comparative Performance of Fusion Strategies Across Applications
| Fusion Approach | Application Domain | Classification Accuracy | Key Advantages |
|---|---|---|---|
| Early Fusion [54] | Motor Imagery | 76.21% | Maximizes complementary information utilization |
| Middle Fusion [55] | Drug Addiction Detection | 92.6% | Balanced processing with attention mechanisms |
| Late Fusion [57] | Motor Imagery | 83.26% | Robust to modality-specific noise and artifacts |
| DeepSyncNet (Early) [9] | Motor Imagery/Mental Arithmetic | Superior to late fusion | Effective cross-modal interaction |
| MBC-ATT (Late) [24] | Cognitive Task (n-back) | Competitive performance | Dynamic dependency modeling |
Issue: Poor Fusion Performance Despite High Single-Modality Accuracy
Symptoms: Combined model performs worse than individual modalities; validation loss fluctuates excessively.
Diagnosis: Temporal misalignment between EEG and fNIRS signals, or inadequate cross-modal feature interaction.
Solutions:
Issue: Model Overfitting with Limited Labeled Data
Symptoms: High training accuracy with poor test performance; frequent oscillations in test loss.
Diagnosis: Insufficient regularization for high-dimensional multimodal features.
Solutions:
Issue: Handling Different Temporal Resolutions
Symptoms: Information loss; failure to capture complementary timing relationships.
Diagnosis: Inadequate temporal alignment strategy between fast EEG and slow fNIRS responses.
Solutions:
Q: When should I choose early fusion over late fusion?
A: Early fusion is preferable when you have high-quality, temporally aligned data and want to maximize information exchange between modalities. It's particularly effective for learning complex cross-modal relationships, as demonstrated in motor imagery tasks where it outperformed other approaches [54] [9]. Late fusion is better when modalities have different reliability patterns or when computational efficiency is prioritized.
Q: How can I address the spatial mismatch between EEG and fNIRS?
A: Implement coordinate mapping to common reference space (e.g., MNI coordinates) as shown in Table 1 of the error-related brain state study [58]. For deep learning approaches, convert 1D signals to 3D tensors based on sensor positions to align spatial representations before fusion [9].
Q: What attention mechanisms work best for EEG-fNIRS fusion?
A: Cross-modal attention with residual integration effectively balances modality contributions [9]. Modality-guided attention selectively emphasizes relevant features based on task context [24]. For feature-level fusion, separate attention mechanisms for each modality (like in AR-TSNET) reduce redundant features [55].
Table 2: Essential Materials and Computational Tools for EEG-fNIRS Fusion Research
| Resource Type | Specific Tool/Method | Function/Purpose |
|---|---|---|
| Public Datasets | Shin et al. Dataset [54] | Motor imagery and mental arithmetic tasks with simultaneous EEG-fNIRS |
| Deep Learning Frameworks | Y-shaped Network Architecture [54] | Early fusion implementation with modality-specific encoders |
| Fusion Algorithms | Attentional Fusion (AF) [9] | Adaptive integration of EEG and fNIRS features |
| Pre-trained Models | EFRM [56] | Transfer learning for limited data scenarios |
| Signal Processing | Regularized Canonical Correlation Analysis [58] | Joint analysis of EEG band power and HbO changes |
Diagram 1: Early fusion workflow with raw data integration
Diagram 2: Late fusion with decision-level integration
The selection of appropriate fusion strategies fundamentally shapes the effectiveness of EEG-fNIRS integration in research pipelines. Early fusion maximizes information exchange but demands careful temporal alignment. Middle fusion balances specialized processing with cross-modal learning, while late fusion offers robustness against modality-specific artifacts. Current evidence suggests early fusion generally provides superior performance for motor imagery and cognitive tasks, though optimal implementation requires attention to neurovascular coupling principles and modality-specific characteristics. As multimodal research advances, emerging techniques like cross-modal attention and representation learning will further enhance our ability to leverage the complementary strengths of EEG and fNIRS.
This technical support center addresses common challenges in fNIRS-EEG fusion research, framed within a broader thesis on data preprocessing pipelines.
Answer: The choice of fusion stage involves a trade-off between model performance and computational complexity. Recent evidence suggests that for tasks like motor imagery, early-stage fusion can yield superior classification accuracy.
Early-stage fusion significantly outperformed middle- and late-stage fusion for motor imagery classification (N = 57, P < 0.05), achieving an average accuracy of 76.21% [59].
Table 1: Comparison of EEG-fNIRS Fusion Strategies
| Fusion Stage | Description | Key Advantage | Reported Challenge |
|---|---|---|---|
| Early Fusion | Combining raw or pre-processed signals before feature extraction [59]. | Higher performance in some tasks (e.g., motor imagery) [59]. | Requires careful handling of temporal and spatial misalignment [60]. |
| Late Fusion | Combining decisions or high-level features from unimodal classifiers [24]. | Leverages modality-specific expertise; more flexible. | May fail to exploit deep cross-modal correlations [24]. |
| Cross-Modal Attention | A middle-stage fusion that dynamically weights features based on inter-modal relationships [24] [60]. | Dynamically focuses on relevant signals and modalities; improves interpretability [24]. | Increases model complexity and requires more data for training. |
Answer: This is a fundamental challenge, as EEG captures millisecond-level electrical activity while fNIRS measures hemodynamic responses over seconds. Advanced deep learning models are designed to explicitly address this.
Answer: Poor probe contact is a major source of noise. Solutions span both hardware design and signal processing.
Answer: Ensuring model interpretability and biological plausibility is crucial for scientific validity.
This protocol is based on the Multimodal MBC-ATT model for cognitive state decoding [24].
Diagram 1: MBC-ATT Fusion Workflow
This protocol outlines the TSMMF model for cross-subject emotion recognition [60].
Diagram 2: BCMT Fusion Process
Table 2: Essential Components for an EEG-fNIRS Fusion Research Pipeline
| Item / Reagent | Function & Explanation | Technical Specifications / Notes |
|---|---|---|
| Simultaneous fNIRS-EEG System | Core hardware for data acquisition. Integration is key to avoid synchronization issues. | Prefer systems with a unified processor for acquisition [32]. Ensure compatibility between EEG amplifiers and fNIRS hardware. |
| Custom Hybrid Helmet | Ensures stable and consistent placement of EEG electrodes and fNIRS optodes. | 3D-printed or thermoplastic custom helmets are superior to modified elastic caps for probe stability [32]. |
| Short-Separation fNIRS Channels | Critical "reagent" for signal preprocessing. Used to regress out scalp hemodynamics and improve brain signal specificity [61]. | Optimal source-detector distance: ~8.4 mm for adults. If physically unavailable, a transformer-based virtual short-channel generator can be used [61]. |
| Public Multimodal Datasets | Serves as a benchmark and training resource for developing and validating new algorithms. | Examples include the dataset by Shin et al. for motor imagery and mental arithmetic [59], and others used for n-back and word generation tasks [24]. |
| Structured Sparse Multiset CCA (ssmCCA) | A data fusion analysis "reagent" used to identify latent variables that are maximally correlated across EEG and fNIRS modalities. | Useful for identifying brain regions consistently activated in both electrical and hemodynamic domains (e.g., the left inferior parietal lobe during motor tasks) [63]. |
Q1: My MNE-Python installer fails with an error on macOS. What should I do?
A common issue, particularly on macOS, is that the installer may show an error message, yet the installation can still be successful. After running the installer, verify the installation by opening a prompt and executing python -c "import mne; mne.sys_info()". This command will generate a report detailing the versions of MNE-Python and all optional dependencies. As long as this report shows no errors and lists mne as installed, your installation is likely functional despite the installer error [64]. If problems persist, try using the latest version of the installer.
Q2: How can I handle errors gracefully in an automated MNE-BIDS pipeline?
When running automated processing with the MNE-BIDS-Pipeline, you can control its behavior upon encountering an error via the on_error configuration setting. This can be set to:
'abort': Stop processing immediately (default) [65].
'continue': Attempt to continue with other processing steps [65].
'debug': Drop into a debugger to investigate the error (note: this deactivates parallel processing) [65].
Q3: What is a typical fNIRS preprocessing workflow in MNE-Python?
A standard pipeline to convert raw fNIRS data to analyzable haemoglobin concentrations involves several key stages [22] [66]:
Raw object.optical_density.beer_lambert_law.Q4: How does BrainFusion support multimodal data fusion, and how does it compare to MNE-Python? BrainFusion is a unified, low-code framework designed specifically to simplify the complexity of multimodal data analysis. It provides standardized data containers and automated pipelines for integrating EEG, fNIRS, EMG, and ECG signals. Its key advantage for multimodal research is its focus on cross-modal feature engineering, coupling analysis, and an application generator that allows you to export workflows as standalone executable tools [67]. While MNE-Python is a powerful script-based toolkit that can analyze EEG and fNIRS in a unified environment, BrainFusion aims to make these advanced analyses more accessible and deployable with less coding required [67].
Problem: Processed fNIRS data contains unexpected noise, making it difficult to observe the haemodynamic response.
Solutions:
- Visually inspect the raw signal with raw_intensity.plot() and mark consistently poor-quality channels as "bad" [22] [66].
- Apply an automatic epoch rejection threshold (e.g., reject_criteria = dict(hbo=80e-6)) [22].

Problem: The mne.Epochs function throws an error, or the log shows that a large number of epochs were dropped.
Solutions:
- Verify your event markers with mne.viz.plot_events [66].
- Check the reject parameter and adjust the tolerance (e.g., 80e-6 for HbO) based on your signal's amplitude [22].
- Use reject_by_annotation=True to automatically exclude data segments you have manually marked as bad during raw data inspection [22] [68].
- Confirm the epoch time window (tmin, tmax) is defined correctly and does not extend beyond the available data, especially for trials near the start or end of the recording.

Problem: Difficulty in synchronizing, aligning, and jointly analyzing fNIRS and EEG data streams.
Solutions:
This protocol outlines the key steps for converting raw fNIRS data to haemoglobin changes and extracting event-related responses [22] [66].
1. Data Loading and Inspection:
- Use mne.io.read_raw_nirx() or BIDS-compatible functions to load data.
- Plot the raw signal with raw_intensity.plot() to identify obvious artifacts or bad channels.

2. Pre-processing:
- Convert to optical density: raw_od = mne.preprocessing.nirs.optical_density(raw_intensity).
- Apply the modified Beer-Lambert law: raw_haemo = mne.preprocessing.nirs.beer_lambert_law(raw_od, ppf=0.1).
- Band-pass filter the haemoglobin signals: raw_haemo.filter(0.05, 0.7, ...).

3. Epoching and Averaging:
- Extract events: events, event_dict = events_from_annotations(raw_haemo).
- Create epochs: epochs = mne.Epochs(raw_haemo, events, tmin=-5, tmax=15, baseline=(None, 0), reject=reject_criteria).
- Average by condition: evoked = epochs['Condition'].average().

The following diagram illustrates this workflow.
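Numerically, the optical-density and Beer-Lambert steps of this protocol reduce to simple arithmetic; the numpy sketch below makes that explicit. The extinction-coefficient values here are illustrative placeholders, not the tabulated constants that MNE applies internally.

```python
import numpy as np

def intensity_to_haemo(intensity, eps, distance_m, dpf):
    """Convert raw intensity (2 wavelengths x samples) to HbO/HbR changes.

    Mirrors the two conversion steps above: optical density
    OD = -ln(I / mean(I)), then the modified Beer-Lambert law
    dOD = eps @ dC * d * DPF, solved for the concentration changes dC.
    `eps` holds [[eps_HbO_wl1, eps_HbR_wl1], [eps_HbO_wl2, eps_HbR_wl2]].
    """
    od = -np.log(intensity / intensity.mean(axis=1, keepdims=True))
    # Solve eps @ dC = OD / (d * DPF) for every time sample at once.
    dc = np.linalg.solve(eps, od / (distance_m * dpf))
    return dc  # rows: [dHbO, dHbR]

# Synthetic two-wavelength intensity trace (positive, near-constant).
rng = np.random.default_rng(0)
intensity_data = np.ones((2, 100)) + 0.001 * rng.standard_normal((2, 100))
eps = np.array([[1.49, 3.84], [2.53, 1.80]])  # illustrative values only
haemo = intensity_to_haemo(intensity_data, eps, distance_m=0.03, dpf=6.0)
print(haemo.shape)  # (2, 100)
```

The 0.03 m source-detector distance and DPF of 6 correspond to the typical values quoted elsewhere in this guide; in practice the MNE functions handle wavelength bookkeeping and channel pairing for you.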
This protocol describes a high-level workflow for conducting a joint analysis of EEG and fNIRS data within the BrainFusion environment for a motor imagery task [67].
1. Data Ingestion and Standardization:
2. Automated Pre-processing:
3. Feature Engineering and Modeling:
4. Deployment:
The workflow for this multimodal fusion is shown below.
The tables below summarize critical parameters and software tools used in typical fNIRS and multimodal experiments.
Table 1: Key Pre-processing Parameters in MNE-Python for fNIRS
| Parameter | Typical Value / Function | Purpose |
|---|---|---|
| ppf (Partial pathlength factor) | 0.1 [22] or 6 [66] | Factor for the modified Beer-Lambert law conversion. |
| reject (Epoch rejection) | dict(hbo=80e-6) [22] | Threshold for automatic rejection of noisy epochs. |
| filter (Band-pass) | 0.05-0.7 Hz [22] | Removes slow drifts (high-pass cutoff) and cardiac noise (low-pass cutoff). |
| baseline | (None, 0) [22] | Defines the time period for baseline correction of epochs. |
| source_detector_dist | > 0.01 m (1 cm) [22] | Minimum distance to pick channels sensitive to brain activity. |
Table 2: Essential Software Tools for the Research Pipeline
| Tool / "Reagent" | Function | Application Context |
|---|---|---|
| MNE-Python | A powerful, script-based toolbox for electrophysiology and fNIRS data analysis. | Core processing of individual modalities (EEG or fNIRS); building custom analysis pipelines [22] [69]. |
| BrainFusion | A low-code, unified framework for multimodal BCI and brain-body interaction research. | Simplifying EEG-fNIRS-ECG-EMG fusion; feature engineering; deploying models as executables [67]. |
| Cedalion | An open-source Python toolbox for fNIRS and DOT analysis, supporting image reconstruction. | An alternative for channel- and image-space analysis, GLM, and optode co-registration [70]. |
| NeuroDOT | A MATLAB-based software package for diffuse optical tomography and image reconstruction. | Volumetric imaging of functional brain activations from fNIRS data [70]. |
| BIDS Format | A standardized file system and metadata structure for brain data. | Ensuring reproducibility and simplifying data sharing and reading across tools like MNE-BIDS and BrainFusion [67] [68]. |
The fusion of functional near-infrared spectroscopy (fNIRS) and electroencephalography (EEG) data presents a significant dimensionality challenge for researchers. Integrating the two modalities yields high-dimensional data spaces with a vast number of potential features extracted from both electrical and hemodynamic responses [8]. The curse of dimensionality is particularly severe in neuroimaging data, where large sets of potential neural features (e.g., responses from voxels, electrodes, temporal windows, and frequency bands) are often recorded across a limited set of stimuli and samples [71]. Without proper regularization and feature selection techniques, researchers risk developing models that overfit the training data and fail to generalize to new datasets, ultimately compromising the validity and reliability of study findings.
This technical guide addresses the most common dimensionality-related challenges encountered in fNIRS-EEG research pipelines and provides evidence-based solutions to enhance the quality of multimodal data analysis.
Answer: High dimensionality without proper regularization negatively impacts analysis in several measurable ways:
Answer: The sample-to-feature ratio is a critical consideration. These warning signs indicate insufficient samples:
As a general guideline, the number of samples should substantially exceed the number of features, with some studies recommending ratios of 10:1 or higher for stable model estimation [71].
Answer: Research indicates several effective approaches:
Table 1: Feature Selection Methods for fNIRS-EEG Data
| Method | Mechanism | Best Use Cases | Performance Evidence |
|---|---|---|---|
| Mutual Information-Based Selection | Maximizes relevance and minimizes redundancy between features | Optimal for identifying complementary features across modalities [72] | Up to 5% improvement in hybrid classification accuracy [72] |
| Correlation Stability | Ranks features based on stability across stimulus repetitions | Establishing reliable neural signatures for stimulus classification [71] | Successfully applied across multiple semantic representation studies [71] |
| Attribute/Feature Correlation | Selects features based on correlation with semantic attributes | Zero-shot learning and neural-semantic mapping [71] | Achieves similar accuracy with far fewer features than stability methods [71] |
| Wrapper Methods with Search Optimization | Uses prediction performance to guide feature selection | Maximizing classification accuracy for specific tasks [74] | Achieved >96% accuracy in MI and MA tasks with optimized feature subsets [74] |
Answer: The choice of regularization depends on your analysis goals:
Answer: These techniques should be implemented as sequential steps in your pipeline:
Research shows that focusing on highly informative features before model training enhances performance significantly. One study found that utilizing only the most informative features revealed differential encoding patterns, with accelerometry jerk primarily decoded through local spectral power while bar press rate was decoded via inter-regional connectivity [73].
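To make the "sequential steps" concrete, the sketch below chains filter-based feature selection and an L2-regularized classifier inside cross-validation, so that selection is refit within each fold and does not leak information from the test split. The data, k, and C values are illustrative assumptions, not parameters from the cited studies.

```python
import numpy as np
from functools import partial
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
# Synthetic "EEG + fNIRS" feature matrix: 60 trials x 300 features,
# with only the first 10 features carrying class information.
X = rng.standard_normal((60, 300))
y = rng.integers(0, 2, size=60)
X[:, :10] += y[:, None] * 1.5

pipe = Pipeline([
    # Filter step: keep the 20 features with highest mutual information.
    ("select", SelectKBest(partial(mutual_info_classif, random_state=0), k=20)),
    # L2 (ridge-style) regularization via the C parameter.
    ("clf", LogisticRegression(C=0.5, max_iter=1000)),
])
scores = cross_val_score(pipe, X, y, cv=5)  # selection refit inside each fold
print(scores.mean())
```

Wrapping both stages in a Pipeline is what enforces the correct sequencing: running SelectKBest on the full dataset before cross-validation would optimistically bias the accuracy estimate.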
This protocol follows the methodology demonstrated to improve EEG-fNIRS classification performance by optimizing complementarity, redundancy, and relevance between multimodal features [72].
Materials Needed:
Step-by-Step Procedure:
Feature Extraction:
Mutual Information Calculation:
Feature Subset Selection:
Classification Performance Evaluation:
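The mutual-information steps above can be sketched as a greedy loop that maximizes relevance to the class label while penalizing redundancy with already-selected features. This is a simplified mRMR-style illustration, not the exact optimization of [72]; the discretization and scoring choices are assumptions.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import mutual_info_score

def discretize(col, bins=4):
    """Quantile-discretize one feature column for pairwise MI estimates."""
    edges = np.quantile(col, np.linspace(0, 1, bins + 1)[1:-1])
    return np.digitize(col, edges)

def mrmr_select(X, y, n_select):
    """Greedily pick features maximizing relevance I(f; y) minus the mean
    redundancy I(f; f_selected) with features chosen so far."""
    relevance = mutual_info_classif(X, y, random_state=0)
    Xd = np.column_stack([discretize(X[:, j]) for j in range(X.shape[1])])
    selected = [int(np.argmax(relevance))]
    while len(selected) < n_select:
        scores = {}
        for j in range(X.shape[1]):
            if j in selected:
                continue
            redundancy = np.mean([mutual_info_score(Xd[:, j], Xd[:, s])
                                  for s in selected])
            scores[j] = relevance[j] - redundancy
        selected.append(max(scores, key=scores.get))
    return selected

rng = np.random.default_rng(0)
X = rng.standard_normal((80, 30))
y = (X[:, 0] + X[:, 5] > 0).astype(int)             # driven by features 0 and 5
X[:, 1] = X[:, 0] + 0.01 * rng.standard_normal(80)  # near-duplicate of feature 0
chosen = mrmr_select(X, y, n_select=5)
print(chosen)
```

The redundancy term is what discourages selecting the near-duplicate feature once its twin is already in the subset, which is the complementarity idea the protocol targets.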
Table 2: Research Reagent Solutions for fNIRS-EEG Fusion Studies
| Reagent/Resource | Function/Purpose | Example Application | Implementation Notes |
|---|---|---|---|
| Modified Beer-Lambert Law (mBLL) | Converts raw fNIRS intensity to hemoglobin concentrations | Deriving relative HbO and HbR concentration changes [18] [75] | Requires appropriate pathlength factor (PPF) correction [18] |
| Scalp Coupling Index (SCI) | Quantifies optode-scalp coupling quality | Identifying and rejecting poor-quality fNIRS channels [18] | Channels with SCI <0.5 are typically marked as bad [18] |
| Motion Artifact Correction Algorithms | Identifies and removes motion-induced signal components | Improving signal quality in movement-prone experiments | Multiple methods available; must be documented transparently [62] |
| Joint Independent Component Analysis (jICA) | Projects multimodal features to new feature space | Mental stress detection from fused EEG-fNIRS [74] | Can improve detection rates (91%→98% in one study) [74] |
| Atomic Search Optimization | Nature-inspired algorithm for feature selection | Identifying optimal feature subsets in high-dimensional data [74] | Part of a larger multi-level progressive learning framework [74] |
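The Scalp Coupling Index from the table can be computed directly from the two wavelength signals of a channel as their correlation within the cardiac band. The band edges and filter order below are common choices, not values prescribed by [18]; the signals are synthetic.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def scalp_coupling_index(wl1, wl2, sfreq, band=(0.7, 1.5)):
    """SCI: correlation of a channel's two wavelength signals in the
    cardiac band. Channels with SCI < 0.5 are typically marked bad."""
    b, a = butter(4, [band[0] / (sfreq / 2), band[1] / (sfreq / 2)],
                  btype="band")
    f1, f2 = filtfilt(b, a, wl1), filtfilt(b, a, wl2)
    return float(np.corrcoef(f1, f2)[0, 1])

# Synthetic channel: shared 1.2 Hz cardiac pulsation plus independent noise.
sfreq = 10.0
t = np.arange(0, 60, 1 / sfreq)
rng = np.random.default_rng(1)
cardiac = np.sin(2 * np.pi * 1.2 * t)
wl1 = cardiac + 0.2 * rng.standard_normal(t.size)
wl2 = cardiac + 0.2 * rng.standard_normal(t.size)
sci = scalp_coupling_index(wl1, wl2, sfreq)
print(sci)  # well above the 0.5 threshold -> good optode-scalp coupling
```

The intuition: a well-coupled optode pair sees the same cardiac pulsation at both wavelengths, so the band-limited signals correlate strongly; a loose optode produces wavelength-specific noise and a low SCI.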
This protocol implements the ridge regression-based encoding model effective for zero-shot learning applications, which can be adapted for various fNIRS-EEG fusion contexts [71].
Procedure:
Data Preparation:
Model Training:
Regularization Optimization:
Model Validation:
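The steps above can be sketched end-to-end with scikit-learn: RidgeCV performs the regularization search by internal cross-validation, and a held-out split provides the validation. The attribute dimensions, alpha grid, and split sizes are illustrative assumptions on synthetic data.

```python
import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(7)
# Stimulus attribute matrix (80 stimuli x 20 semantic attributes) and a
# synthetic neural response depending linearly on 3 of the attributes.
S = rng.standard_normal((80, 20))
w_true = np.zeros(20)
w_true[:3] = [1.0, -0.5, 0.8]
response = S @ w_true + 0.3 * rng.standard_normal(80)

# Regularization optimization: RidgeCV selects alpha by internal CV.
model = RidgeCV(alphas=np.logspace(-2, 3, 20))
model.fit(S[:60], response[:60])                 # training split
pred = model.predict(S[60:])                     # held-out validation split
r = np.corrcoef(pred, response[60:])[0, 1]
print(model.alpha_, r)
```

For a zero-shot setting, the held-out stimuli would be entirely novel items whose predicted responses are matched against candidate attribute vectors; the ridge fit itself is unchanged.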
Dimensionality Reduction Pipeline for fNIRS-EEG Data
Multimodal Fusion Strategies for EEG-fNIRS
Table 3: Software Tools for fNIRS-EEG Dimensionality Analysis
| Tool/Platform | Primary Function | Dimensionality Features | Implementation Considerations |
|---|---|---|---|
| MNE-Python | Full fNIRS and EEG processing pipeline | Signal filtering, HRF estimation, feature extraction | Provides comprehensive tutorial for fNIRS processing [18] |
| FieldTrip | Preprocessing and averaging of NIRS data | Optical density conversion, motion artifact handling | Supports single-channel and multi-channel analysis [75] |
| LightGBM | Gradient boosting framework for decoding | Built-in feature selection and regularization | Demonstrated <110 ms training time for neural decoding [73] |
| Custom MATLAB/Python Scripts | Implementing specialized algorithms | Mutual information calculation, custom regularization | Flexibility for research-specific dimensionality needs [72] [71] |
Functional Near-Infrared Spectroscopy (fNIRS) signals contain components originating from both neurovascular coupling and systemic physiological sources, creating a significant challenge for accurate data interpretation. The fNIRS signal comprises six components: neuronal evoked changes in the cerebral compartment (the signal of interest), systemic evoked changes in the cerebral compartment, systemic evoked changes in the extracerebral compartment, vascular evoked changes in both cerebral and extracerebral compartments, and muscular evoked changes in the extracerebral compartment [76]. Systemic physiological confounders—including cardiorespiratory activity (heart rate, respiration), blood pressure changes (Mayer waves), and changes in arterial carbon dioxide concentration (PaCO2)—can mimic true hemodynamic responses, potentially causing false positives or masking genuine neuronal activity (false negatives) [76]. Proper identification and correction of these confounders is therefore essential for valid fNIRS research, particularly in fNIRS-EEG fusion studies where accurate hemodynamic information complements electrophysiological data.
Problem: Unexpected hemodynamic response morphology during rest or control conditions.
Root Cause: Systemic physiological changes can mimic the hemodynamic response function (HRF). For example, an increase in PaCO2 during a task can cause a large increase in HbO and a slight decrease in HbR, closely resembling a normal neurovascular response [76]. These systemic changes are not adequately filtered out by standard bandpass filters.
Solution: Implement Systemic Physiology Augmented fNIRS (SPA-fNIRS)
Problem: Artificially high correlation between fNIRS channels during resting-state measurements.
Root Cause: Systemic physiological noise (e.g., blood pressure oscillations, respiration) represents a global source of variance that synchronizes signals across channels, leading to spurious correlations [77]. Approximately 94% of the signal measured by a regular fNIRS channel (source-detector distances ~3 cm) reflects systemic hemodynamic changes from extracerebral tissue [77].
Solution:
Table 1: Comparison of Physiological Confounder Correction Methods
| Method | Key Principle | Effectiveness | Limitations |
|---|---|---|---|
| Short-Channel Regression | Uses <1cm separation channels to measure & regress out superficial signals | High for scalp contributions | Reduces number of available neural channels; requires specific hardware [77] |
| PCA/ICA | Removes components with highest variance (typically physiological) | High for global systemic noise | Number of components to remove is arbitrary; may remove neural signal [77] |
| SPA-fNIRS | Directly measures & regresses physiological variables (HR, MAP, CO2) | Comprehensive for multiple confounders | Requires additional equipment and synchronization [76] |
| Bandpass Filtering | Removes frequency content outside expected HRF range (e.g., >0.7 Hz) | Limited to specific frequency noise | Cannot remove confounders in same frequency band as HRF [12] [22] |
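In its simplest form, the short-channel regression in the first row of the table is an ordinary least-squares fit: the short-separation (scalp) signal is scaled to best explain the long channel and then subtracted. The numpy sketch below uses synthetic signals; real pipelines typically embed this as a regressor in a GLM.

```python
import numpy as np

def short_channel_regress(long_ch, short_ch):
    """Regress the short-separation (scalp) signal out of the long channel
    via OLS: long = beta * short + residual, return the residual."""
    short = short_ch - short_ch.mean()
    beta = np.dot(short, long_ch) / np.dot(short, short)
    return long_ch - beta * short

rng = np.random.default_rng(3)
t = np.arange(0, 120, 0.1)
systemic = np.sin(2 * np.pi * 0.1 * t)        # Mayer-wave-like oscillation
neural = np.exp(-((t - 60) ** 2) / 50.0)      # slow HRF-like bump
long_ch = neural + 0.8 * systemic + 0.05 * rng.standard_normal(t.size)
short_ch = systemic + 0.05 * rng.standard_normal(t.size)

cleaned = short_channel_regress(long_ch, short_ch)
before = abs(np.corrcoef(long_ch, systemic)[0, 1])
after = abs(np.corrcoef(cleaned, systemic)[0, 1])
print(before, after)  # correlation with the systemic component drops
```

Because the short channel samples (almost) only extracerebral tissue, the residual retains the neural bump while most of the shared systemic oscillation is removed.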
Problem: Lack of clear HRF shape, multiple signal dips, or inconsistent responses across subjects [78].
Root Cause: Variability in data quality, analysis pipelines, and researcher choices significantly impacts results. A recent reproducibility study found that nearly 80% of research teams agreed on group-level results only when hypotheses were strongly supported by literature, with agreement improving with better data quality and researcher experience [53].
Solution:
Problem: Variability in statistical significance and effect sizes when using different processing approaches.
Root Cause: fNIRS analysis involves multiple stages with numerous valid choices at each step, including data selection criteria, preprocessing options, region of interest selection, and statistical modeling [53]. Teams with higher self-reported analysis confidence (correlated with fNIRS experience) showed greater agreement [53].
Solution:
Purpose: To simultaneously measure systemic physiological variables alongside fNIRS for complete confounder identification [76].
Materials:
Procedure:
Analysis:
Purpose: To obtain accurate resting-state networks by controlling for systemic physiological influences [77].
Materials: fNIRS system with short-separation channels, physiological monitors.
Procedure:
Processing Pipeline:
Table 2: Essential Research Reagents and Tools for fNIRS Confounder Management
| Item | Function | Example Products/Formats |
|---|---|---|
| Short-Separation Optodes | Measures superficial scalp hemodynamics for signal regression | Custom designs; integrated in commercial systems like NIRScout, NIRSport [77] |
| fNIRS Analysis Software | Data processing, visualization, and statistical analysis | HOMER3, NIRSLab, MNE-NIRS, NIRS-SPM, ICNNA [44] |
| Physiological Monitors | Measures heart rate, blood pressure, respiration, CO2 | ECG, finger PPG, capnograph, respiratory belt [76] |
| 3D Digitization System | Records precise optode locations for accurate spatial registration | Polhemus, Structure Sensor, photogrammetry systems [32] |
| Synchronization Hardware | Aligns fNIRS, EEG, and physiological data streams | Lab Streaming Layer (LSL), trigger boxes, network synchronization [32] |
| Custom Headgear | Maintains stable optode placement with integrated EEG | 3D-printed helmets, cryogenic thermoplastic sheets [32] |
The combination of fNIRS and EEG presents unique opportunities and challenges for confounder management. EEG provides excellent temporal resolution but suffers from poor spatial localization, while fNIRS offers better spatial resolution but is contaminated by systemic physiology [32] [27]. In fused systems:
Effective identification and correction of systemic physiological confounders is essential for robust fNIRS research, particularly in fused fNIRS-EEG studies. By implementing SPA-fNIRS approaches, incorporating short-channel measurements, and using appropriate processing pipelines, researchers can significantly improve the validity and reproducibility of their findings. Standardized protocols and comprehensive reporting of methodological choices will further enhance the reliability of fNIRS across the research community.
In electroencephalography (EEG) research, motion and muscle artifacts pose a significant challenge for data interpretation, particularly in naturalistic study designs and mobile brain-imaging scenarios. These artifacts introduce non-neural signals that can obscure genuine brain activity, complicating both unimodal analysis and the growing field of multimodal fNIRS-EEG fusion research [8]. Motion artifacts typically arise from head movements, electrode displacement, or cable sway, often producing high-amplitude, low-frequency signals that can mask event-related potentials. Muscle artifacts, primarily from jaw clenching, forehead tension, or neck strain, introduce high-frequency, non-stationary noise that contaminates the EEG spectrum [79] [80]. Effectively removing these contaminants is a critical preprocessing step to ensure the validity and reliability of neural signatures, especially when correlating electrical activity with the hemodynamic responses measured by fNIRS [8] [63].
Q1: What are the most effective automated techniques for removing motion artifacts from high-density EEG during running or walking?
For high-motion scenarios like running, Artifact Subspace Reconstruction (ASR) and iCanClean are currently the most effective automated techniques [80]. ASR uses a sliding-window principal component analysis (PCA) to identify and remove high-variance signal components that deviate significantly from a calibrated baseline period. A key parameter is the standard deviation cutoff (k), with values between 10-20 recommended for locomotion studies to avoid over-cleaning [80]. iCanClean leverages canonical correlation analysis (CCA) to identify and subtract noise subspaces that are highly correlated with pseudo-reference noise signals derived from the EEG data itself (e.g., very low-frequency content below 3 Hz). Studies show that using iCanClean with an R² threshold of 0.65 and a 4-second sliding window optimally preserves brain signals while removing gait-related motion artifacts [80].
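The windowed variance-thresholding idea behind ASR can be conveyed in a few lines of numpy. The sketch below is a didactic simplification: real ASR reconstructs artifact subspaces from calibration statistics over overlapping windows, whereas this version simply zeroes components whose amplitude exceeds k standard deviations of the calibration data.

```python
import numpy as np

def asr_like_clean(data, calib, k=15.0, win=100):
    """Didactic ASR-like cleaning: project non-overlapping windows onto the
    calibration PCA basis and zero components exceeding k calibration SDs."""
    calib = calib - calib.mean(axis=1, keepdims=True)
    cov = calib @ calib.T / calib.shape[1]        # channels x channels
    evals, evecs = np.linalg.eigh(cov)            # orthonormal PCA basis
    comp_std = np.sqrt(np.maximum(evals, 1e-12))
    out = data.astype(float).copy()
    for start in range(0, data.shape[1] - win + 1, win):
        seg = out[:, start:start + win]
        proj = evecs.T @ seg
        rms = np.sqrt((proj ** 2).mean(axis=1))
        proj[rms > k * comp_std, :] = 0.0         # drop artifact subspace
        out[:, start:start + win] = evecs @ proj
    return out

rng = np.random.default_rng(5)
calib = rng.standard_normal((8, 1000))            # "clean" baseline segment
data = rng.standard_normal((8, 1000))
data[:, 400:500] += 50.0                          # large motion burst
cleaned = asr_like_clean(data, calib, k=15.0)
print(np.linalg.norm(data[:, 400:500]), np.linalg.norm(cleaned[:, 400:500]))
```

Windows that stay within k standard deviations are reconstructed unchanged (the basis is orthonormal), which is why a conservative k of 10-20 preserves neural variance while still attenuating large bursts.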
Q2: How can I distinguish muscle artifact components from neural components in an Independent Component Analysis (ICA) decomposition?
Distinguishing muscle from neural components in ICA relies on spatial, temporal, and spectral heuristics [81]. The table below summarizes key distinguishing characteristics:
Table: Distinguishing ICA Components
| Feature | Muscle Artifact Component | Neural (Brain) Component |
|---|---|---|
| Spatial Topography | Focal projections over temporal, frontal, or neck muscles; non-dipolar [81]. | Spatially smooth, dipolar maps consistent with cortical generators [80]. |
| Spectral Power | High-frequency content (> 20 Hz); broad spectral profile [79] [81]. | Peak power in standard bands (Delta, Theta, Alpha, Beta); rhythmic activity [79]. |
| Time Course | Bursty, high-frequency, non-stationary activations [81]. | More continuous, oscillatory dynamics time-locked to tasks or events. |
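One of the spectral heuristics in the table, broad high-frequency power for muscle components, can be quantified simply. In the scipy sketch below, the 20 Hz split point is the commonly cited boundary, but the decision threshold itself is an illustrative choice.

```python
import numpy as np
from scipy.signal import welch

def high_freq_power_ratio(component, sfreq, split_hz=20.0):
    """Fraction of spectral power above split_hz -- a simple heuristic
    for flagging muscle-like ICA components."""
    freqs, psd = welch(component, fs=sfreq, nperseg=256)
    return float(psd[freqs > split_hz].sum() / psd.sum())

sfreq = 250.0
t = np.arange(0, 10, 1 / sfreq)
rng = np.random.default_rng(2)
alpha_comp = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)
emg_comp = rng.standard_normal(t.size)            # broadband, EMG-like

r_alpha = high_freq_power_ratio(alpha_comp, sfreq)
r_emg = high_freq_power_ratio(emg_comp, sfreq)
print(r_alpha, r_emg)  # low for the rhythmic component, high for the EMG-like one
```

In practice such a ratio is only one vote alongside topography and time-course criteria; classifiers like ICLabel combine all three.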
Q3: My research involves simultaneous fNIRS-EEG. Will motion artifact removal in the EEG affect the fidelity of the fused data?
Robust artifact removal in EEG is essential for high-quality data fusion and generally does not negatively impact fused data fidelity if performed correctly [8] [63]. Since fNIRS and EEG capture different physiological signals (hemodynamic vs. electrical), they are susceptible to different artifact types and require separate, optimized preprocessing pipelines before fusion [8] [82]. Effectively cleaning the EEG ensures that the shared latent neural variables discovered through fusion algorithms like structured sparse multiset Canonical Correlation Analysis (ssmCCA) genuinely reflect coupled neurovascular activity, rather than being driven by residual EEG artifacts [63]. The complementary nature of the signals means proper cleaning enhances fusion outcomes [51].
Q4: Why does my ICA decomposition perform poorly on data from participants with excessive motion?
Excessive motion creates widespread, high-amplitude artifacts that violate ICA's core assumption of statistical independence among sources [80]. When artifacts dominate the signal, ICA cannot effectively separate brain activity from noise, leading to components that represent mixed sources. To address this, apply a robust preprocessing pipeline before ICA. This includes using ASR or iCanClean to reduce large motion artifacts, followed by bad channel detection and interpolation. This process "conditions" the data, allowing ICA to subsequently find more physiologically plausible and dipolar brain components [83] [80].
Q5: Are deep learning methods viable for muscle and motion artifact removal in experimental pipelines?
Yes, deep learning (DL) is an emerging and highly viable approach. Models like AnEEG (an LSTM-based Generative Adversarial Network) show promise in generating artifact-free EEG signals by learning complex, non-linear noise patterns from training data [84]. These models can be trained in a supervised manner to map noisy EEG inputs to clean outputs. The primary advantage is their ability to adapt to specific artifact types without requiring manual parameter tuning for each dataset. However, their effectiveness depends on large, diverse, and well-labeled training datasets, which can be a limitation for some experimental paradigms [84].
Problem: Expected Event-Related Potential (ERP) components (e.g., P300) are absent or severely attenuated after cleaning data from a mobile experiment.
Solution: This indicates potential over-cleaning, where neural signals of interest have been removed along with the artifacts.
- If you used ASR, the k parameter might be set too low. Increase the k value (e.g., from 10 to 20 or 30) to make the algorithm less aggressive, thereby preserving more of the neural signal variance [80].

Problem: After standard preprocessing and ICA, high-frequency muscle noise (EMG) remains visible in the temporal or frontal electrodes.
Solution: Muscle artifacts are often persistent and may require a targeted approach.
The following table provides a structured comparison of the primary artifact removal techniques discussed, based on recent literature.
Table: Comparison of Motion and Muscle Artifact Removal Techniques
| Technique | Underlying Principle | Best For | Key Parameters | Performance Metrics | Key Advantages | Key Limitations |
|---|---|---|---|---|---|---|
| ICA [79] [81] | Blind Source Separation (BSS) to isolate statistically independent sources. | Removing stereotyped artifacts: eye blinks, lateral eye movements, heart signals. | Number of components; heuristics for component rejection. | High component dipolarity indicates good separation [80]. | Does not require reference channels; provides intuitive component topographies. | Fails with high-amplitude, non-stereotyped motion; requires manual component inspection. |
| ASR [83] [80] | Real-time PCA to identify and remove high-variance components. | Continuous data with large, non-stereotyped motion artifacts (e.g., walking, running). | Standard deviation cutoff (k); calibration data. | Reduces power at gait frequency & harmonics; improves ICA dipolarity [80]. | Fast, automated, good for mobile EEG; handles large-amplitude artifacts. | Risk of over-cleaning with low k; performance depends on quality of calibration data. |
| iCanClean [80] | Canonical Correlation Analysis (CCA) to subtract noise subspaces. | High-motion environments; ideal when dual-layer EEG sensors are available. | R² correlation threshold (e.g., 0.65); sliding window size. | Outperforms ASR in producing dipolar components & recovering ERPs during running [80]. | Highly effective for motion artifact; can use pseudo-reference signals. | Can be computationally intensive; requires parameter tuning. |
| Deep Learning (e.g., AnEEG) [84] | Trained neural network (e.g., GAN with LSTM) to map noisy EEG to clean EEG. | Scenarios with large, labeled datasets for training. | Model architecture; loss functions; training dataset size/quality. | Lower NMSE & RMSE; higher CC with ground truth signals [84]. | Can model complex, non-linear artifacts; minimal manual intervention after training. | Requires large, diverse training datasets; "black box" nature; potential for overfitting. |
This protocol is designed for experiments involving significant participant movement, such as walking or running [80].
1. Remove line noise: use the cleanline function or similar to adaptively estimate and remove 50/60 Hz line noise and its harmonics.
2. Apply ASR: use the clean_artifacts function with a k parameter of 15 (start with this value and adjust based on your data). The algorithm will use an initial clean segment of the data for calibration [80].
3. Run ICA (e.g., the runica algorithm) on the preprocessed data; after the previous steps, ICA will more effectively isolate remaining artifacts such as eye blinks and residual muscle activity.

The workflow for this protocol is visualized below.
This protocol focuses specifically on mitigating the effects of electromyogenic (EMG) contamination.
Table: Essential Tools for a Robust EEG Artifact Removal Pipeline
| Tool Name | Type | Primary Function | Application Context |
|---|---|---|---|
| EEGLAB [83] [81] | Software Environment | Interactive MATLAB toolbox for processing EEG data. | The foundational platform for most preprocessing workflows, including ICA analysis and plugin integration. |
| ICLabel | EEGLAB Plugin | Automated classification of ICA components into categories (Brain, Eye, Muscle, Heart, Line Noise, Channel Noise, Other). | Rapid, objective initial assessment of components to guide manual cleaning decisions [80]. |
| RELAX | EEGLAB Plugin | Pipeline for targeted artifact reduction, cleaning specific periods of eye components and frequencies of muscle components. | Prevents effect size inflation and source localization bias; ideal for Go/No-Go and N400 tasks [85]. |
| NEAR | EEGLAB Plugin | Newborn EEG Artifact Removal pipeline automating bad channel detection (via LOF) and ASR. | Specifically designed for the unique challenges of noisy, non-stereotyped artifacts in newborn and infant EEG [83]. |
| Artifacts Subspace Reconstruction (ASR) | Algorithm/Plugin | Removes high-amplitude, non-stereotyped artifacts from continuous data via PCA and calibration. | Essential preprocessing for mobile EEG studies involving walking, running, or significant movement [80]. |
| iCanClean | Algorithm/Software | Uses CCA and reference noise signals to detect and correct motion artifact subspaces in the EEG. | Superior motion removal in high-mobility studies, especially with dual-layer sensor setups [80]. |
| Structured Sparse Multiset CCA (ssmCCA) | Data Fusion Algorithm | Fuses multimodal data (e.g., fNIRS-EEG) to find shared latent variables. | Identifies brain regions with consistent activity across modalities after unimodal preprocessing [63]. |
Temporal alignment ensures that the electrophysiological activity captured by EEG and the hemodynamic responses measured by fNIRS are synchronized in time. Spatial alignment involves precisely co-registering EEG electrode locations with fNIRS optode positions on the scalp to enable accurate mapping of brain activity [32].
The neurovascular coupling mechanism forms the theoretical basis for this integration, where neural electrical activity (measured by EEG) is inherently accompanied by hemodynamic and metabolic responses (measured by fNIRS) [86]. This relationship allows researchers to study both the direct electrical neural activity and the indirect metabolic responses simultaneously.
Without proper alignment, researchers cannot confidently attribute signals to specific neural events or brain regions, potentially leading to erroneous conclusions about brain function and connectivity [87] [88].
Table 1: Consequences of Poor Alignment in fNIRS-EEG Research
| Alignment Type | Potential Research Impact | Data Quality Issues |
|---|---|---|
| Temporal Misalignment | Incorrect interpretation of neurovascular coupling dynamics; flawed event-related analysis | Cannot correlate fast EEG responses with slower fNIRS hemodynamic responses |
| Spatial Misalignment | Misattribution of brain activity to incorrect anatomical regions; reduced spatial accuracy | Poor co-registration between electrical and hemodynamic activity maps |
| Semantic Misalignment | Failure to identify meaningful cross-modal relationships; incomplete data interpretation | Inability to connect physiological patterns across modalities meaningfully |
Two main technical approaches exist for temporal synchronization:
Separate System Synchronization: fNIRS and EEG data are obtained using separate systems (e.g., NIRScout and BrainAMP) and synchronized during analysis via host computer software [32]. While simpler to implement, this method may lack the precision needed for microsecond-level EEG analysis.
Unified Processor Synchronization: A single processor simultaneously acquires and processes both EEG and fNIRS signals, achieving highly precise synchronization [32]. This method requires more complex system design but provides superior temporal alignment accuracy.
For post-acquisition temporal alignment, techniques include:
Spatial alignment requires precise co-registration of EEG electrodes and fNIRS optodes. Current helmet fusion approaches include:
Table 2: Spatial Alignment Hardware Configurations
| Helmet Design Approach | Advantages | Limitations & Considerations |
|---|---|---|
| Integrated EEG/fNIRS Cap [32] | Simple implementation; maintains probe coupling | Elastic fabric may cause variable source-detector distances; inconsistent scalp contact pressure |
| 3D-Printed Custom Helmet [32] | Perfect individual fit; flexible component positioning | Higher production costs; longer fabrication time |
| Cryogenic Thermoplastic Sheet [32] | Cost-effective; lightweight; customized shaping | Potential rigidity; may exert uncomfortable pressure on head |
| Flexible Cap with Modifications [32] | Utilizes existing equipment; relatively straightforward | Probe stability issues during movement; placement inconsistencies |
Critical steps for spatial alignment:
Temporal drift occurs when recording systems operate on independent clocks, causing gradually increasing misalignment.
Solution Protocol:
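One common approach is to linearly rescale one recorder's timestamps so that shared synchronization events land at the reference clock's times. The numpy sketch below assumes a constant drift and exactly two shared sync events; real pipelines often fit many sync pulses, and the drift values here are synthetic.

```python
import numpy as np

def correct_drift(timestamps, sync_local, sync_ref):
    """Map one recorder's timestamps onto a reference clock by linearly
    rescaling between two shared sync events (constant-drift model)."""
    (l0, l1), (r0, r1) = sync_local, sync_ref
    scale = (r1 - r0) / (l1 - l0)
    return r0 + (timestamps - l0) * scale

# Example: the fNIRS clock runs 0.1% fast and starts 2 s late vs. the EEG clock.
eeg_times = np.arange(0.0, 600.0, 0.5)
fnirs_times = 2.0 + eeg_times * 1.001

aligned = correct_drift(fnirs_times,
                        sync_local=(fnirs_times[0], fnirs_times[-1]),
                        sync_ref=(eeg_times[0], eeg_times[-1]))
print(np.max(np.abs(aligned - eeg_times)))  # residual misalignment near zero
```

A 0.1% drift sounds negligible, but over a 10-minute recording it accumulates to 0.6 s, far exceeding the tolerance of event-related EEG analysis, which is why periodic sync markers are essential with separate acquisition systems.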
Poor scalp contact creates signal artifacts and reduces data quality in both modalities.
Troubleshooting Strategies:
EEG systems are susceptible to interference from fNIRS electronic components.
Mitigation Approaches:
The following protocol is adapted from a published study investigating neural activity during motor execution, observation, and imagery [63].
Research Goal: To examine shared and distinct neural mechanisms of motor execution (ME), motor observation (MO), and motor imagery (MI) using aligned fNIRS-EEG.
Participants: 21 healthy adults (16 right-handed, 5 ambidextrous), aged 18-65 years
Equipment & Setup:
Alignment Methodology:
Experimental Conditions:
Data Fusion Analysis: Structured Sparse Multiset Canonical Correlation Analysis (ssmCCA) identified brain regions consistently detected by both modalities, revealing activation in left inferior parietal lobe, superior marginal gyrus, and post-central gyrus across all conditions [63].
Table 3: Essential Materials for fNIRS-EEG Alignment Research
| Item | Function | Technical Specifications |
|---|---|---|
| Simultaneous fNIRS-EEG Cap [32] [63] | Integrated mounting platform for both modalities | Elastic fabric with embedded electrode/optode fixtures; multiple sizes for head circumference variation |
| 3D Magnetic Digitizer [63] | Precise spatial localization of sensors | Fastrak/Polhemus systems; accuracy ±0.2–0.8 mm; records nasion, inion, preauricular landmarks |

| Conductive EEG Gel | Ensures electrical contact for EEG electrodes | Electrolyte composition; non-interfering with optical signals; appropriate viscosity for stability |
| Cryogenic Thermoplastic [32] | Customizable helmet substrate for improved fit | Softens at ~60°C; moldable to head shape; retains stability when cooled; lightweight |
| Optical Phantoms | fNIRS signal calibration and validation | Tissue-simulating materials with known optical properties; used for system performance verification |
Yes, but with important limitations. Separate systems synchronized via software (e.g., NIRScout and BrainAMP) can improve real-time EEG classification accuracy [32]. However, this approach may not achieve the precision required for microsecond-level EEG analysis, and unified processor systems are recommended for research requiring high temporal precision.
For accessibility compliance, standard text should have a contrast ratio of at least 7:1 against its background, while large-scale text (18pt or 14pt bold) requires at least 4.5:1 [89] [90]. These standards ensure that researchers with visual impairments or color vision deficiencies can interpret alignment diagrams and data visualizations.
Alignment creates synergy by combining EEG's high temporal resolution (milliseconds) with fNIRS's superior spatial resolution (centimeters) [86] [32]. EEG provides direct measurement of neural electrical activity, while fNIRS measures indirect hemodynamic responses through neurovascular coupling [86]. Proper alignment enables researchers to leverage the complementary strengths of both modalities while mitigating their individual limitations.
Several analytical approaches exist:
Q1: What is the primary motivation for fusing fNIRS and EEG signals in brain imaging research? fNIRS and EEG are complementary neuroimaging techniques. EEG provides millisecond-level temporal resolution to capture neural electrical activity, while fNIRS measures hemodynamic responses with better spatial localization and is more robust to motion artifacts. Their integration provides a more comprehensive picture of brain activity by combining fast neural dynamics with improved spatial information. [8] [24]
Q2: What are the most common types of noise and artifacts affecting fNIRS signals, and how can they be addressed? fNIRS signals are contaminated by various physiological noises, including cardiac (~1 Hz), respiratory (~0.3 Hz), and Mayer waves (~0.1 Hz). Motion artifacts are also prevalent. Common remediation strategies include using digital filters (particularly band-pass finite impulse response filters), motion artifact correction algorithms, and employing short-separation measurements to regress out superficial contaminants. [23] [8]
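A minimal sketch of the band-pass FIR approach described above, using synthetic fNIRS data and assumed parameters (10 Hz sampling, 0.01–0.2 Hz passband, sinusoidal stand-ins for the physiological components):

```python
import numpy as np
from scipy.signal import firwin, filtfilt

fs = 10.0                          # assumed fNIRS sampling rate (Hz)
t = np.arange(0, 300, 1 / fs)      # 5 minutes of data

# Synthetic channel: hemodynamic response (0.08 Hz) plus cardiac (~1 Hz)
# and respiratory (~0.3 Hz) contamination.
hemo = np.sin(2 * np.pi * 0.08 * t)
noise = 0.5 * np.sin(2 * np.pi * 1.0 * t) + 0.3 * np.sin(2 * np.pi * 0.3 * t)
raw = hemo + noise

# Zero-phase band-pass FIR (0.01-0.2 Hz, as in the text); filtfilt avoids
# the phase distortion a one-pass filter would introduce.
taps = firwin(numtaps=501, cutoff=[0.01, 0.2], pass_zero=False, fs=fs)
clean = filtfilt(taps, 1.0, raw)

err_before = np.sqrt(np.mean((raw - hemo) ** 2))
err_after = np.sqrt(np.mean((clean - hemo) ** 2))
```

The filter length (501 taps) is an assumed trade-off: longer filters sharpen the narrow passband edges but require proportionally longer recordings.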
Q3: How do artifact removal approaches typically differ between EEG and fNIRS in multimodal studies? While many studies incorporate robust artifact handling for EEG (e.g., for ocular and muscle activity), confounder correction in fNIRS often remains limited primarily to filtering or motion artifact removal. Furthermore, short-separation measurements and other auxiliary signals for fNIRS remain underutilized in many fusion studies. [8]
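The underutilized short-separation correction mentioned above can be sketched as an ordinary least-squares regression on synthetic data; the channel geometry, coupling coefficient, and noise levels below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Superficial (scalp) physiology seen by a short-separation channel, plus a
# cortical signal of interest seen only by the long channel.
systemic = 0.05 * np.cumsum(rng.standard_normal(n))
cortical = np.sin(2 * np.pi * np.arange(n) / 200)
short_ch = systemic + 0.02 * rng.standard_normal(n)
long_ch = cortical + 0.8 * systemic + 0.02 * rng.standard_normal(n)

# Least-squares regression of the short channel (plus an intercept) out of
# the long channel -- the core of short-separation correction.
X = np.column_stack([short_ch, np.ones(n)])
beta, *_ = np.linalg.lstsq(X, long_ch, rcond=None)
corrected = long_ch - X @ beta

err_before = np.sqrt(np.mean((long_ch - cortical) ** 2))
err_after = np.sqrt(np.mean((corrected - cortical) ** 2))
```

In practice the regression is often done per chromophore and may use a GLM with additional auxiliary regressors (e.g., accelerometry) rather than a single short channel.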
Q4: What are the main categories of fusion strategies for integrating EEG and fNIRS data? Fusion methods can be categorized as:
Q5: Why is data preprocessing standardization particularly important for fNIRS research? The fNIRS research landscape shows significant heterogeneity in analysis approaches and pre-processing procedures. Often, there is a lack of complete methodological description, making study replication and results comparison challenging. Standardization enhances the reliability, repeatability, and traceability of reported findings. [23] [91]
Problem: Hemodynamic responses are masked by strong physiological noise or motion artifacts.
Solution:
Problem: A hybrid fNIRS-EEG Brain-Computer Interface (BCI) or decoding model yields lower accuracy than expected.
Solution:
Problem: Inability to reproduce results from a published fNIRS or fNIRS-EEG study.
Solution:
Table 1: Standardized Preprocessing Steps for fNIRS and EEG Data
| Step | fNIRS | EEG |
|---|---|---|
| Raw Data Inspection | Visual inspection for motion artifacts, heartbeat oscillations. [23] | Visual inspection for amplitude jumps, muscle artifacts. [92] |
| Filtering | Band-pass filter (e.g., 0.01-0.2 Hz) to remove drift and cardiac noise. [23] | High-pass (e.g., 0.5-1 Hz) & Low-pass (e.g., 40 Hz) to remove slow drifts and line noise. [92] [93] |
| Artifact Correction | Motion artifact correction (e.g., wavelet, tPCA). [23] | ICA for ocular and muscle artifact removal. [92] |
| Referencing | - | Re-referencing (e.g., to average reference). [92] |
| Epoching | Epoching relative to task onset, with a sufficiently long baseline. | Epoching relative to event markers. |
| Bad Channel/ Trial Rejection | Signal quality inspection, CV/SNR-based rejection. | Statistical methods (e.g., Autoreject). [92] |
This protocol is adapted from a multimodal dataset containing simultaneous EEG-fNIRS recordings during motor imagery tasks. [93]
1. Participants:
2. Experimental Paradigm:
3. Data Acquisition Specifications: Table 2: Data Acquisition Parameters for a Multimodal Experiment
| Parameter | EEG | fNIRS |
|---|---|---|
| System | Neuroscan SynAmps2 amplifier | NIRScout system (NIRx) |
| Channels | 64 electrodes | 8 sources, 8 detectors (forming 24 channels) |
| Sampling Rate | 1000 Hz | 7.8125 Hz |
| Reference | Left mastoid (M1) | - |
| Filter during Acquisition | 0.5-100 Hz Band-pass, 50 Hz Notch | - |
| Optode/Electrode Placement | International 10-20 system | Left hemisphere, International 10-5 system |
| Impedance/Quality Check | < 10 kΩ | Visual inspection for good optical coupling |
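Given the mismatched sampling rates in the table above (1000 Hz EEG vs. 7.8125 Hz fNIRS), a common first step before joint epoching is to interpolate the sparse fNIRS samples onto the EEG time base. A minimal NumPy sketch, using a synthetic slow envelope as a stand-in for a shared hemodynamic-scale fluctuation:

```python
import numpy as np

fs_eeg, fs_nirs = 1000.0, 7.8125   # rates from the acquisition table
dur = 30.0                          # seconds of recording (assumed)

t_eeg = np.arange(0, dur, 1 / fs_eeg)
t_nirs = np.arange(0, dur, 1 / fs_nirs)

def slow_envelope(t):
    # A 0.1 Hz stand-in for a shared hemodynamic-scale fluctuation.
    return np.sin(2 * np.pi * 0.1 * t)

nirs = slow_envelope(t_nirs)

# Linearly interpolate sparse fNIRS samples onto the dense EEG timeline so
# both modalities share one time base for epoching and fusion.
nirs_on_eeg = np.interp(t_eeg, t_nirs, nirs)

max_err = np.max(np.abs(nirs_on_eeg - slow_envelope(t_eeg)))
```

Linear interpolation is adequate here because the hemodynamic signal is heavily oversampled even at 7.8125 Hz; alternatively, EEG features can be downsampled to the fNIRS grid to avoid upsampling noise.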
Table 3: Essential Research Reagent Solutions for fNIRS-EEG Fusion
| Item | Function/Application |
|---|---|
| NIRScout System (fNIRS) | A continuous-wave fNIRS device used for measuring changes in oxy- and deoxy-hemoglobin concentrations in the cortex. [93] |
| Neuroscan SynAmps2 (EEG) | An amplifier system for high-quality, multi-channel EEG data acquisition. [93] |
| BrainVision Analyzer | Software for comprehensive analysis of EEG and fNIRS data, including preprocessing, visualization, and statistical analysis. [94] |
| MNE-Python | An open-source Python package for exploring, visualizing, and analyzing human neurophysiological data (EEG, MEG, fNIRS). It supports preprocessing, source decomposition, and machine learning. [92] |
| Short-Separation fNIRS Channels | Special fNIRS source-detector pairs placed close together (~8 mm) to selectively measure systemic physiological noise from the scalp, enabling its regression from standard channels. [8] |
| Dirichlet Distribution & DST | Mathematical frameworks for modeling uncertainty in classifier outputs and fusing decisions in a robust manner, improving final classification accuracy. [57] |
| Cross-Modal Attention Mechanism | A deep learning component that allows a model to dynamically focus on the most relevant features from EEG and fNIRS modalities for a given task. [24] |
FAQ 1: What are the most critical steps to ensure our fNIRS-EEG pipeline is reproducible? Reproducibility hinges on robust data management practices from the start. Key steps include:
FAQ 2: We experience significant artifacts in our fNIRS data during participant movement. How can we correct for this? Motion artifacts are a common challenge. A multi-pronged approach is recommended:
FAQ 3: Our EEG and fNIRS signals are not properly synchronized. What is the best method for temporal alignment? Precise synchronization is fundamental for fusion. There are two primary methods:
FAQ 4: At which stage of analysis should we fuse EEG and fNIRS data for the best classification results? Research indicates that the fusion stage impacts performance. A study on motor imagery classification found that early-stage fusion of EEG and fNIRS, where raw or pre-processed data from both modalities is combined before feature extraction, yielded significantly higher accuracy compared to middle-stage (feature-level) or late-stage (decision-level) fusion [59]. A Y-shaped neural network architecture can be an effective design for implementing early-stage fusion [59].
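A deliberately simplified, non-neural-network illustration of why early (feature-level) fusion can help: concatenating features from two synthetic modalities before classification. This is not the Y-shaped network itself; the data generator and logistic-regression classifier are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 300
y = rng.integers(0, 2, n)

# Toy features: each modality carries a weak, partly independent view of the label.
eeg = y[:, None] * 0.8 + rng.standard_normal((n, 10))
fnirs = y[:, None] * 0.8 + rng.standard_normal((n, 6))

def cv_acc(X):
    """5-fold cross-validated accuracy for a given feature matrix."""
    return cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()

acc_eeg = cv_acc(eeg)
acc_fnirs = cv_acc(fnirs)
acc_fused = cv_acc(np.hstack([eeg, fnirs]))   # early (feature-level) fusion
```

Because the classifier sees both feature sets jointly, it can weight complementary evidence sample by sample, which is the intuition behind early fusion's advantage in the cited study.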
Problem: Inconsistent Data Quality Across Recording Sessions
Problem: Poor Performance of Multimodal Classification Algorithm
Problem: Pipeline is Not Reproducible by Other Team Members
The table below summarizes key specifications from a seminal study that created a hybrid EEG-fNIRS dataset for motor imagery, serving as a reference for designing reproducible experiments [97].
Table 1: Experimental Protocol and Data Specifications from the HEFMI-ICH Dataset
| Aspect | Specification |
|---|---|
| Participants | 17 normal subjects (12M/5F, 23.6 ± 1.8 yrs) & 20 ICH patients (17M/3F, 50.8 ± 10.3 yrs) [97] |
| Primary Paradigm | Left-hand vs. Right-hand Motor Imagery (MI) [97] |
| Trial Structure | Visual cue (2s) → Execution/MI (10s) → Rest (15s) [97] |
| EEG System | g.HIamp amplifier [97] |
| fNIRS System | NirScan, continuous-wave system [97] |
| EEG Sampling Rate | 256 Hz [97] |
| fNIRS Sampling Rate | 11 Hz [97] |
| EEG Channels | 32 electrodes [97] |
| fNIRS Channels | 90 measurement channels from 32 sources & 30 detectors [97] |
| Synchronization | Event markers from E-Prime 3.0 simultaneously triggered both systems [97] |
Table 2: Performance of Different Fusion Strategies in Motor Imagery Classification
| Fusion Stage | Description | Average Accuracy | Key Advantage |
|---|---|---|---|
| Early-Stage Fusion | Combining raw/pre-processed data before feature extraction [59]. | 76.21% [59] | Allows the model to learn complementary features directly from the data. |
| Middle-Stage (Feature-Level) Fusion | Extracting features from each modality first, then concatenating them [8]. | ~65-70% (EEG-only ~65%) [59] | Leverages domain knowledge for feature engineering. |
| Late-Stage (Decision-Level) Fusion | Each modality has a separate classifier, and their outputs are combined [8]. | ~57-65% (fNIRS-only ~57%) [59] | Robust to failures in one modality. |
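For contrast with the table above, decision-level (late) fusion can be sketched as averaging per-modality classifier probabilities; the synthetic data, classifiers, and equal weighting are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 400
y = rng.integers(0, 2, n)
eeg = y[:, None] * 0.6 + rng.standard_normal((n, 8))
fnirs = y[:, None] * 0.6 + rng.standard_normal((n, 5))

train, test = slice(0, 300), slice(300, None)
clf_eeg = LogisticRegression(max_iter=1000).fit(eeg[train], y[train])
clf_fnirs = LogisticRegression(max_iter=1000).fit(fnirs[train], y[train])

# Decision-level (late) fusion: average the per-modality class probabilities,
# then take the argmax. Each classifier can also run alone if one modality
# fails, which is the robustness advantage noted in the table.
proba = (clf_eeg.predict_proba(eeg[test]) + clf_fnirs.predict_proba(fnirs[test])) / 2
acc_fused = (proba.argmax(axis=1) == y[test]).mean()
```

Confidence-weighted averaging (or Dempster-Shafer-style combination, as in the evidence-theory work cited earlier) replaces the simple mean when the modalities' reliabilities differ.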
Table 3: Essential Research Reagents and Materials for fNIRS-EEG Fusion
| Item | Function in the Experiment |
|---|---|
| Hybrid EEG-fNIRS Cap | A custom cap that integrates EEG electrodes and fNIRS optodes in a predefined geometry, ensuring co-registration of both modalities. Can be based on standard elastic caps or custom 3D-printed/thermoplastic designs [32] [97]. |
| Continuous-Wave fNIRS System | Measures changes in oxygenated (HbO) and deoxygenated hemoglobin (HbR) concentrations in the cortex using near-infrared light, providing hemodynamic data [8] [97]. |
| Bioamplifier for EEG | Records electrical potentials from the scalp with high temporal resolution, capturing neural electrophysiological activity [97]. |
| 3D Magnetic Digitizer | Records the precise 3D locations of fNIRS optodes and EEG electrodes on a participant's head. This is critical for accurate spatial localization and co-registration with brain anatomy [63]. |
| Structured Sparse Multiset CCA (ssmCCA) | An advanced data fusion algorithm used to identify brain regions where both electrical (EEG) and hemodynamic (fNIRS) activities are consistently detected, strengthening findings [63]. |
| Public Dataset (e.g., HEFMI-ICH) | Provides a benchmark for validating and optimizing new algorithms. Ensures that methods are tested against standardized, clinically relevant data [97]. |
Q1: What are the most critical signal quality issues when simultaneously acquiring EEG and fNIRS data?
Both EEG and fNIRS signals are contaminated by various artifacts that can compromise data quality and subsequent analysis. For EEG, the primary contamination sources include ocular activity (EOG) and head/neck muscle activity (EMG), which introduce noise across much of the relevant EEG spectrum [8]. For fNIRS, contamination originates from scalp circulation, brain motion in cerebro-spinal fluid, and systemic physiology affecting brain vasculature (e.g., blood pressure changes, breathing, heart rate) [8]. Critically, while the same physiological source (e.g., cardiac activity) can affect both modalities, it manifests with distinct temporal, spatial, and amplitude characteristics in each [8].
Q2: How can I determine if my classification accuracy for a hybrid BCI system is acceptable?
Classification accuracy must be evaluated against baseline performance and state-of-the-art benchmarks. The table below summarizes representative accuracy values from recent studies:
Table 1: Classification Accuracy Benchmarks for Hybrid EEG-fNIRS BCI Systems
| Study / Model | Task | Modality | Reported Accuracy | Benchmark Comparison |
|---|---|---|---|---|
| Deep Learning & Evidence Theory [57] | Motor Imagery | EEG-fNIRS Fusion | 83.26% | 3.78% improvement over state-of-the-art |
| EFRM (Few-shot learning) [56] | Various | EEG-fNIRS | Competitive with supervised models | High performance with minimal labeled data |
| Typical fNIRS-only [56] | Various | fNIRS-only | Lower than multimodal | Improved via shared domain learning with EEG |
| ShallowConvNet (EEG-only) [93] | Motor Imagery (Hand vs. Shoulder) | EEG-only | 65.49% | Baseline for complex joint MI tasks |
Q3: What metrics should I use beyond classification accuracy to evaluate my model properly?
While accuracy is a fundamental metric, a comprehensive evaluation should include additional dimensions:
Q4: My fusion model isn't performing better than my single-modality model. What could be wrong?
This common issue often stems from inadequate fusion strategy or misalignment of multimodal data. Consider these points:
Table 2: EEG Artifact Troubleshooting Guide
| Symptom | Potential Cause | Solution | Preventive Measures |
|---|---|---|---|
| High-frequency noise | Muscle artifact (EMG) from head, neck, or jaw tension | Apply notch filter (e.g., 50/60 Hz) and band-pass filter (e.g., 0.5–100 Hz) [93]. | Instruct subjects to relax jaw and minimize swallowing during task periods [93]. |
| Low-frequency drifts | Poor electrode contact or skin potentials | Check and improve electrode impedance (maintain below 10 kΩ) [93]. | Proper skin abrasion and use of high-conductivity electrolyte gel. |
| Large, slow deflections | Ocular artifact (EOG) from eye blinks or movements | Implement artifact removal algorithms (e.g., ICA, regression) in preprocessing. | Instruct subjects to fixate on a point and suppress blinks during critical task intervals [93]. |
Table 3: fNIRS Signal Troubleshooting Guide
| Symptom | Potential Cause | Solution | Preventive Measures |
|---|---|---|---|
| No detectable HbO/HbR concentration change | Improper optode contact or scalp coupling | Verify signal quality at each channel before experiment; re-adjust cap. | Use a cap size appropriate for the subject's head circumference [97]. Ensure hair does not block optodes. |
| Signal appears saturated or abnormally high | Optode pressure on scalp, causing poor blood flow | Readjust cap to ensure firm but comfortable contact without excessive pressure. | Regularly monitor raw light intensity levels during data acquisition. |
| High-amplitude, sharp spikes in signal | Motion artifacts from head movement | Apply motion artifact correction algorithms (e.g., PCA, wavelet-based methods) [8]. | Stabilize subject's head with a chin rest or headrest. Use short-separation channels to regress out superficial noise [8]. |
| Systemic physiological contamination | Cardiac, respiratory cycles, blood pressure waves | Employ filtering (e.g., 0.01–0.2 Hz bandpass for HbO/HbR) and use short-separation regression [8]. | Maintain a comfortable lab environment to minimize subject anxiety and physiological fluctuations. |
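A simple first-pass detector for the "high-amplitude, sharp spikes" row above: flag samples by a robust (median/MAD) z-score before handing them to wavelet- or tPCA-style correction. The synthetic signal and the z > 5 threshold are assumed choices.

```python
import numpy as np

rng = np.random.default_rng(3)
sig = 0.05 * rng.standard_normal(600)          # 60 s quiet baseline at 10 Hz
sig[300:305] += [0.5, 1.2, 1.8, 1.0, 0.4]      # injected motion spike

# Robust z-score against median/MAD, which (unlike mean/std) is barely
# influenced by the spike itself; flagged samples would then be passed to a
# dedicated correction algorithm rather than simply deleted.
med = np.median(sig)
mad = np.median(np.abs(sig - med))
z = 0.6745 * (sig - med) / mad
artifact_mask = np.abs(z) > 5
n_flagged = int(artifact_mask.sum())
```

Sliding-window variants of the same idea catch slower baseline shifts that a global statistic misses.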
Step 1: Verify Individual Modality Performance
Step 2: Re-examine the Fusion Strategy and Model Architecture
Step 3: Ensure Proper Temporal Synchronization and Alignment
This protocol outlines the key steps for a simultaneous EEG-fNIRS experiment, from setup to data preprocessing [100].
Choosing the right fusion approach depends on data characteristics and the research goal. The following diagram outlines a logical decision process.
Table 4: Essential Materials and Equipment for Hybrid EEG-fNIRS Research
| Item | Specification / Example | Primary Function | Key Considerations |
|---|---|---|---|
| EEG Amplifier | g.HIamp (g.tec) [97], Neuroscan SynAmps2 [93] | Records electrical brain activity from scalp electrodes. | Check sampling rate (≥256 Hz), number of channels, input impedance, and synchronization capability. |
| fNIRS System | NIRScout (NIRx) [93], NirScan (Danyang Huichuang) [97] | Measures hemodynamic changes by detecting near-infrared light attenuation. | Check number of sources/detectors, wavelengths, sampling rate (≥10 Hz), and portability if needed. |
| Integrated Cap | Custom-designed hybrid cap with EEG electrodes and fNIRS optodes [97] | Holds EEG electrodes and fNIRS optodes in a stable, geometrically defined layout. | Ensure compatibility of montage with your target brain areas (e.g., motor cortex). Verify head size options. |
| Electrodes & Gel | Ag/AgCl sintered electrodes, high-conductivity electrolyte gel [93] | Ensure high-fidelity electrical contact between scalp and amplifier. | Maintain impedance below 10 kΩ throughout the experiment [93]. |
| fNIRS Optodes | Sources (lasers/LEDs), Detectors (photodiodes/APDs) | Emit and detect near-infrared light through the scalp and brain tissue. | Source-detector separation is typically 3 cm for adult cerebral measurements [97]. |
| Stimulus Presentation Software | E-Prime, PsychoPy, Presentation | Prescribes the experimental paradigm and delivers synchronized triggers. | Must be able to send simultaneous, low-latency triggers to both EEG and fNIRS recording systems. |
| Data Processing & Analysis Platform | MATLAB, Python (MNE, NiLearn, PyTorch) | Provides environment for implementing preprocessing, fusion, and classification algorithms. | Choose platforms with active community support and specialized toolboxes for EEG/fNIRS analysis. |
Within the data preprocessing pipelines for hybrid functional near-infrared spectroscopy (fNIRS) and electroencephalography (EEG) research, a critical decision point is the level at which data from these two modalities are integrated. This integration, known as fusion, typically occurs at different stages, with early fusion and late fusion representing two predominant strategies. The choice between them significantly impacts the performance and interpretability of brain-computer interface (BCI) systems. This guide provides a technical breakdown of these methods, their experimental protocols, and performance outcomes to assist researchers in selecting and troubleshooting the optimal fusion approach for their specific applications.
1. What is the fundamental difference between early and late fusion?
Early fusion, also known as feature-level fusion, involves combining the raw or pre-processed features from EEG and fNIRS before feeding them into a single, unified classification model. In contrast, late fusion (decision-level fusion) processes each modality through separate models and combines the final decisions or classifier outputs [101].
2. Why does the choice of fusion level matter for fNIRS-EEG research?
EEG and fNIRS capture complementary aspects of brain activity; EEG provides millisecond-level temporal resolution of electrical activity, while fNIRS offers better spatial resolution for slower hemodynamic changes [32] [27]. The fusion strategy determines how effectively this complementary information is exploited. A suboptimal strategy may fail to capture critical cross-modal interactions, leading to reduced classification accuracy and robustness [9].
3. What does the evidence say about which fusion method performs better?
Recent comparative studies and deep learning models have consistently demonstrated the superiority of early-stage fusion. For instance, one study using a Y-shaped neural network found that early-stage fusion significantly outperformed middle and late-stage fusion in classifying left-or-right hand motor imagery tasks [54] [102]. Another study proposing the "DeepSyncNet" framework reported that its early and deep fusion strategy outperformed traditional multimodal fusion techniques [9].
4. What are the practical hardware requirements for implementing early fusion?
Successful early fusion relies on high-quality, synchronized data. This typically requires:
Problem: Low classification accuracy after feature concatenation (Early Fusion).
Problem: Model fails to learn meaningful cross-modal relationships.
Problem: System is unable to achieve precise temporal synchronization.
The following table summarizes key experimental findings from seminal studies comparing fusion methods, providing a quantitative basis for decision-making.
| Study / Model | Fusion Type | Task (Dataset) | Key Methodology | Performance (Accuracy) |
|---|---|---|---|---|
| Y-shaped Network [54] [102] | Early-Stage | Motor Imagery (Dataset A) | Y-shaped ANN integrating bimodal data in initial layers. | 76.21% (average, N=29) |
| Y-shaped Network [54] [102] | Middle-Stage | Motor Imagery (Dataset A) | Y-shaped ANN fusing data in intermediate network layers. | Lower than early-stage (P < 0.05) |
| Y-shaped Network [54] [102] | Late-Stage | Motor Imagery (Dataset A) | Y-shaped ANN combining data in final decision layers. | Lower than early-stage (P < 0.05) |
| Mutual Information & Feature Selection [101] | Feature-Level | Visuo-Mental Task (ALS patients/controls) | Mutual information-based feature selection to optimize complementarity. | Considerably improved hybrid performance vs. single modality and conventional fusion. |
| DeepSyncNet [9] | Early and Deep Fusion | Motor Imagery & Mental Arithmetic | 3D tensor input, Attentional Fusion (AF), and Spatiotemporal Attention (STA). | Outperformed traditional multimodal fusion techniques. |
This protocol is based on the study that directly compared fusion stages [54] [102].
1. Dataset:
2. Data Preprocessing:
3. Fusion & Classification Architecture:
4. Key Finding: The early-stage fusion configuration achieved statistically significantly higher performance than both middle-stage and late-stage fusion, establishing its efficacy for motor imagery classification [54] [102].
The following diagram illustrates the core architectural difference between early and late fusion strategies in a hybrid fNIRS-EEG pipeline.
The table below lists essential "research reagents"—in this context, key datasets, algorithms, and hardware solutions—that are fundamental for conducting research in fNIRS-EEG fusion.
| Item | Function / Description | Relevance to Fusion Research |
|---|---|---|
| Public Datasets | Provide standardized, annotated data for developing and benchmarking fusion algorithms. | Shin's Dataset [54]: A foundational public dataset for motor imagery and mental arithmetic. HEFMI-ICH [97]: The first hybrid dataset including Intracerebral Hemorrhage (ICH) patients, crucial for clinical translation. |
| Mutual Information-based Feature Selection [101] | A filter-based feature selection algorithm that maximizes feature complementarity and relevance while minimizing redundancy. | Critical for optimizing feature-level (early) fusion, preventing classifier confusion from high-dimensional, concatenated feature vectors. |
| Y-shaped Neural Network [54] [102] | A deep learning architecture with separate input branches for each modality that merge at a specific fusion stage. | The standard model for empirically comparing early, middle, and late fusion performance within a consistent framework. |
| g.tec g.HIamp & g.Nautilus [103] | Biosignal amplifiers that can be integrated with fNIRS add-ons (g.SENSOR) for simultaneous data acquisition. | Provides hardware-level synchronization, a prerequisite for reliable early fusion. |
| Custom Hybrid Caps [32] [97] | EEG caps with integrated holders for fNIRS optodes, ensuring fixed and co-registered sensor placement. | Maintains consistent spatial correspondence between EEG electrodes and fNIRS channels, which is vital for meaningful feature fusion. |
| Attentional Fusion (AF) Mechanism [9] | A deep learning module that uses a gating mechanism to adaptively weight and integrate features from EEG and fNIRS. | Represents an advanced early fusion technique that dynamically enhances cross-modal information interaction. |
This technical support center is designed to assist researchers and scientists working at the intersection of motor imagery (MI) based Brain-Computer Interfaces (BCIs) and multi-modal neuroimaging. The guidance provided here is framed within a broader research context focusing on data preprocessing pipelines for functional near-infrared spectroscopy (fNIRS) and electroencephalography (EEG) fusion. The following sections address frequently encountered technical challenges, provide detailed experimental protocols, and offer standardized solutions to ensure reproducible research outcomes in both academic and drug development settings.
Poor classification accuracy often stems from multiple sources, including insufficient training data, inappropriate feature extraction, and suboptimal channel selection.
Simultaneous fNIRS-EEG data acquisition is prone to specific noise types that require targeted pre-processing strategies.
A 'RAILED' error indicates that the input signal is exceeding the dynamic range of the analog-to-digital converter, causing the signal to "rail" or clip at the maximum or minimum voltage.
While Linear Discriminant Analysis (LDA) and Support Vector Machines (SVM) are common, recent advances in deep learning offer significant improvements.
This protocol is essential for training a user-specific MI classifier [104] [105].
Use the calibration script (e.g., `motorimg_calibrate.py`) to display visual cues ('L' for left hand, 'R' for right hand) and record the corresponding marker streams.

This protocol outlines a standardized workflow for fusing EEG and fNIRS data, critical for neurovascular coupling analysis [35] [21].
Table 1: Synchronous EEG-fNIRS Preprocessing Steps
| Step | Modality | Toolbox | Parameters & Actions |
|---|---|---|---|
| Data Import & Synchronization | EEG & fNIRS | Custom Script | Align data streams using recorded timing markers. |
| Preprocessing | EEG | EEGLAB | 1. Downsample to 250 Hz. 2. High-pass filter at 1 Hz. 3. Remove line noise (cleanline). 4. Reject bad channels and interpolate. 5. Apply Artifact Subspace Reconstruction (ASR). 6. Re-reference to global average [21]. |
| | fNIRS | HOMER3 | 1. Convert light intensity to optical density. 2. Detect and correct motion artifacts (e.g., Savitzky-Golay filter). 3. Bandpass filter (0.01-0.1 Hz). 4. Convert to chromophore concentration (HbO, HbR) using the Modified Beer-Lambert Law [12] [21]. |
| Feature Extraction | EEG | EEGLAB | Filter into classic frequency bands: Delta (1-4 Hz), Theta (4-8 Hz), Alpha (8-13 Hz), Beta (13-30 Hz), Gamma (30-40 Hz) [106]. |
| | Joint | Custom Script | Perform Multi-Band Local Neurovascular Coupling (MBLNVC) analysis by reconstructing EEG source activity onto fNIRS channel locations [35]. |
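The final HOMER3 step in the table, conversion to chromophore concentrations via the Modified Beer-Lambert Law, amounts to solving a small linear system per sample. A sketch with illustrative (approximate) extinction coefficients and an assumed differential pathlength factor:

```python
import numpy as np

# Illustrative extinction coefficients (1/(mM*cm)); real analyses should use
# tabulated spectra. Rows: wavelengths (760 nm, 850 nm); cols: (HbO, HbR).
E = np.array([[0.59, 1.55],
              [1.06, 0.78]])
d = 3.0      # source-detector separation (cm), typical for adult recordings
dpf = 6.0    # differential pathlength factor (assumed)

def mbll(delta_od):
    """Solve delta_od = (E @ conc) * d * dpf for conc = (dHbO, dHbR)."""
    return np.linalg.solve(E * d * dpf, delta_od)

# Forward-simulate a known concentration change, then recover it.
true_conc = np.array([0.02, -0.008])           # mM: HbO up, HbR down
delta_od = (E * d * dpf) @ true_conc
recovered = mbll(delta_od)
```

Applying `mbll` column-wise over a time series of optical-density changes yields the HbO/HbR traces that downstream fusion steps consume.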
The following workflow diagram illustrates the integrated preprocessing pipeline for synchronous EEG-fNIRS data:
Table 2: Key Hardware and Software for MI-BCI and fNIRS-EEG Research
| Item Name | Category | Specification / Version | Primary Function in Research |
|---|---|---|---|
| OpenBCI Cyton Board | Hardware | Board with Daisy Module | A versatile, bio-sensing board for high-quality EEG data acquisition [104]. |
| Explore Pro | Hardware | 64-channel EEG device | High-density EEG recording system compatible with LSL for real-time data streaming [105]. |
| fNIRS System | Hardware | e.g., NirSmart-6000A | Measures cortical hemodynamic changes (HbO/HbR) via near-infrared light [35]. |
| Lab Streaming Layer (LSL) | Software | Protocol | A unified system for the simultaneous recording of brain signals and event markers across multiple devices [104] [105]. |
| NeuroPype | Software | Suite | A comprehensive software for designing and executing BCI pipelines, including CSP and GLM [104]. |
| EEGLAB | Software | Toolbox | An open-source MATLAB toolbox for processing EEG signals, including ICA for artifact removal [35] [21]. |
| HOMER3 | Software | Toolbox | An open-source package for converting, processing, and visualizing fNIRS data [21]. |
Table 3: Standardized Filtering Parameters for Preprocessing
| Noise Type | Modality | Filter Type | Recommended Cut-off Frequencies | Purpose |
|---|---|---|---|---|
| Low-Frequency Drift | EEG | High-Pass | 1 Hz | Remove slow signal drifts [21]. |
| Powerline Noise | EEG | Notch | 50 Hz / 60 Hz | Remove mains electricity interference [105]. |
| High-Frequency Noise | EEG | Low-Pass | 40 Hz | Remove muscle and other high-freq. artifacts [35]. |
| Neurovascular Coupling Signal | fNIRS | Bandpass | 0.01 - 0.1 Hz | Isolate the functional hemodynamic response [12] [21]. |
| Systematic Physiological Noise | fNIRS | Bandpass / Wavelet | Custom (e.g., 0.5-2 Hz for cardiac) | Remove cardiac, respiratory, and Mayer wave influences [12]. |
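The powerline-noise row in the table can be implemented as a narrow IIR notch applied zero-phase; a sketch with synthetic 50 Hz contamination (sampling rate, Q factor, and padding length are assumed parameters):

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 250.0                                       # assumed EEG sampling rate
t = np.arange(0, 4, 1 / fs)
alpha = np.sin(2 * np.pi * 10 * t)               # 10 Hz alpha-band activity
raw = alpha + 0.8 * np.sin(2 * np.pi * 50 * t)   # plus 50 Hz mains noise

# Narrow IIR notch at 50 Hz (Q = 30 -> ~1.7 Hz bandwidth), applied
# forward-backward for zero phase; a generous padlen tames edge transients
# from the notch's long impulse response.
b, a = iirnotch(w0=50.0, Q=30.0, fs=fs)
clean = filtfilt(b, a, raw, padlen=200)

resid = np.sqrt(np.mean((clean - alpha) ** 2))
```

For 60 Hz mains regions, only `w0` changes; harmonics (100/120 Hz) need additional notches or a low-pass stage.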
Table 4: Comparison of Classifiers for Motor Imagery BCI
| Classifier | Key Advantages | Key Disadvantages | Reported Accuracy Range |
|---|---|---|---|
| Linear Discriminant Analysis (LDA) | Simple, fast, works well on linearly separable data. | Assumes normal distribution and equal covariance; struggles with complex patterns. | Foundational algorithm, widely used [105]. |
| Support Vector Machine (SVM) | Effective in high-dimensional spaces; versatile with kernel functions. | Can be computationally intensive; performance depends on kernel choice. | Can achieve high accuracy with good features [105]. |
| Random Forest (RF) | Robust to overfitting; handles non-linear relationships well. | Less interpretable than linear models; can be computationally heavy. | ~80.5% (as reported in [35]) |
| EEG-Specific Transformers | Captures complex temporal dependencies; state-of-the-art performance. | Requires large datasets; computationally intensive to train. | ~82.1% (SVM in a fusion study [35]) |
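A quick way to compare the classical classifiers in the table on your own features is a cross-validated benchmark; the sketch below uses synthetic band-power-like features (the data generator is an assumption, not a claim about real MI data).

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, n)                          # two MI classes
X = y[:, None] * 1.0 + rng.standard_normal((n, 12))  # toy band-power features

# 5-fold cross-validated accuracy for each classifier family in the table.
scores = {
    "LDA": cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean(),
    "SVM (RBF)": cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean(),
}
```

On real data, swap in your CSP or band-power feature matrix for `X` and keep the cross-validation structure, ideally with session-wise splits to avoid leakage.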
FAQ 1: What are the primary advantages of using a combined fNIRS-EEG system over either modality alone for cognitive state decoding?
The primary advantage is the synergistic combination of their complementary strengths. EEG provides direct measurement of neural electrical activity with high temporal resolution (millisecond precision), making it ideal for capturing rapid cognitive processes. fNIRS measures the hemodynamic response linked to neural activity with better spatial resolution (around 1 cm), helping to localize the brain regions involved. Using them together provides a more comprehensive picture of brain function, capturing both the fast dynamics and the localized areas of activation [108] [109] [110].
FAQ 2: My decoding performance is poor. Could my EEG preprocessing choices be the cause?
Yes, preprocessing choices significantly influence decoding performance. A 2025 study systematically varying preprocessing steps found that choices like filtering and artifact correction considerably impact classification results [111].
FAQ 3: How reproducible are fNIRS analysis results, and what factors affect them?
Reproducibility in fNIRS can be variable and is influenced by several factors. A large-scale initiative (FRESH) found that nearly 80% of research teams agreed on group-level results when hypotheses were strongly supported by literature. However, agreement was lower at the individual level. Key sources of variability include [53]:
FAQ 4: How can I effectively synchronize EEG and fNIRS systems during data acquisition?
Precise synchronization is crucial for multimodal analysis. Two common methods are hardware triggering, in which a shared event marker (e.g., a TTL pulse via a parallel port) is sent to both acquisition systems, and software synchronization via a unified streaming framework such as Lab Streaming Layer (LSL) [109] [110].
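Whichever method is used, the two recordings often still need post-hoc alignment onto a common clock. Below is a minimal numpy sketch that estimates the clock offset from trigger events logged by both systems; the function name and all timestamp values are hypothetical, for illustration only.

```python
import numpy as np

def align_offset(eeg_trigger_times, fnirs_trigger_times):
    """Estimate the clock offset (in seconds) between two systems that both
    recorded the same trigger events, by averaging the per-event lag."""
    eeg = np.asarray(eeg_trigger_times)
    fnirs = np.asarray(fnirs_trigger_times)
    return float(np.mean(fnirs - eeg))

# Hypothetical trigger timestamps (s), each logged on its own system clock:
eeg_trig = np.array([10.000, 25.000, 40.000])
fnirs_trig = np.array([12.503, 27.502, 42.501])  # fNIRS clock started ~2.5 s later
offset = align_offset(eeg_trig, fnirs_trig)
# Subtracting `offset` from the fNIRS timestamps maps them onto the EEG clock.
```

Averaging over several triggers reduces the influence of per-event jitter in either system's event logging.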
FAQ 5: What is the best way to handle motion artifacts in fNIRS signals?
Motion artifacts are a significant challenge and can be addressed through both hardware and algorithmic solutions: attaching an accelerometer/IMU to the acquisition cap provides a motion reference signal for artifact removal [37], while algorithmic approaches include Savitzky-Golay filtering and PCA/ICA-based correction [21].
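As an illustration of the algorithmic route, the sketch below applies Savitzky-Golay smoothing, one commonly used option for fNIRS [21], to a simulated noisy trace. scipy is assumed to be available, and the signal parameters are invented for the demo; large spike-like artifacts may need dedicated correction methods rather than smoothing alone.

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
fs = 10                                # Hz, a typical fNIRS sampling rate
t = np.arange(0, 30, 1 / fs)
hemo = np.sin(2 * np.pi * 0.05 * t)    # slow, hemodynamic-band oscillation
noise = rng.normal(0, 0.5, t.size)     # high-frequency noise riding on top
raw = hemo + noise

# Savitzky-Golay: fit a low-order polynomial in a sliding window, which
# preserves the slow hemodynamic shape while suppressing fast fluctuations.
smoothed = savgol_filter(raw, window_length=31, polyorder=3)
```

The window length (here 3.1 s at 10 Hz) trades off noise suppression against temporal smearing of the hemodynamic response.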
Problem: Your machine learning model fails to reliably decode cognitive states (e.g., different levels of working memory load) from your fNIRS-EEG data.
Potential Causes and Solutions:
- Cause: Inadequate preprocessing. Solution: verify that modality-appropriate cleaning was applied, e.g., filtering, ASR, and ICA for EEG; motion artifact correction and 0.01-0.1 Hz bandpass filtering for fNIRS [21].
- Cause: Non-informative features. Solution: apply feature selection (e.g., mutual-information-based) to retain features that are complementary across modalities rather than redundant [101].
- Cause: Poor experimental design for multimodality. Solution: ensure task blocks and inter-trial intervals are long enough to accommodate the slow fNIRS hemodynamic response, which peaks roughly 5-8 seconds post-stimulus [8] [118].
Problem: The recorded EEG or fNIRS signals are excessively noisy when the systems are used together.
Potential Causes and Solutions:
- Cause: Physical interference between sensors. Solution: use an integrated cap designed to co-locate EEG electrodes and fNIRS optodes (e.g., a cap with dedicated slits) so that neither sensor type disturbs the other's scalp contact [109] [110].
- Cause: Persistent motion artifacts. Solution: record an accelerometer/IMU reference signal on the cap and use it to drive motion artifact correction [37].
- Cause: Improper referencing or filtering in EEG. Solution: re-reference (e.g., to the global average), apply a 1 Hz high-pass filter, and remove line noise [21].
The following table summarizes a representative experimental protocol for semantic decoding, which can be adapted for working memory tasks involving different types of mental imagery.
Table 1: Experimental Protocol for Semantic Category Decoding with Mental Imagery
| Component | Description |
|---|---|
| Core Task | Differentiate between semantic categories (e.g., Animals vs. Tools) during various mental imagery tasks [113]. |
| Participants | Native speakers (if task involves silent naming); right-handed individuals; normal/corrected-to-normal vision [113]. |
| Stimuli | 18 images from each category (e.g., Animals: cat, dog, elephant; Tools: hammer, saw, scissors). Images are gray-scaled, cropped, and presented on a white background [113]. |
| Mental Tasks | 1. Silent Naming: silently name the object. 2. Visual Imagery: visualize the object in the mind. 3. Auditory Imagery: imagine the sounds associated with the object. 4. Tactile Imagery: imagine the feeling of touching the object [113]. |
| Trial Structure | 1. Stimulus presentation. 2. Cued mental task execution for a fixed duration (e.g., 3-5 seconds). 3. Inter-trial interval [113]. |
| Data Acquisition | Record simultaneous EEG and fNIRS. Instruct participants to minimize physical movements during the mental task period [113]. |
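The trial structure above implies an epoching step in analysis: cutting the continuous recordings into fixed-length windows around each stimulus onset. Below is a minimal numpy sketch; the function name, sampling rate, and onset times are illustrative, not part of the cited protocol.

```python
import numpy as np

def epoch(data, onsets, fs, tmin=0.0, tmax=5.0):
    """Cut a (channels, samples) array into trials around event onsets (s).

    Returns an array of shape (trials, channels, window_samples)."""
    starts = (np.asarray(onsets) + tmin) * fs
    n = int(round((tmax - tmin) * fs))
    return np.stack([data[:, int(s):int(s) + n] for s in starts])

fs = 250
data = np.random.randn(32, fs * 60)     # 32 EEG channels, 60 s of recording
onsets = [5.0, 15.0, 25.0]              # hypothetical stimulus times (s)
trials = epoch(data, onsets, fs, tmin=0.0, tmax=3.0)  # 3 s mental-task window
# trials.shape == (3, 32, 750)
```

The same function applies to fNIRS, though `tmax` would typically be longer there to capture the slow hemodynamic response.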
The diagram below illustrates the logical workflow for a typical simultaneous fNIRS-EEG experiment, from setup to data fusion.
Table 2: Key Materials for fNIRS-EEG Fusion Research
| Item | Function in Research |
|---|---|
| Simultaneous EEG-fNIRS Cap | A specialized headcap (e.g., actiCAP with 128+ slits) that physically accommodates both EEG electrodes and fNIRS optodes, often using the international 10-20 system for co-registration [109] [110]. |
| fNIRS System | Hardware that emits near-infrared light and detects its attenuation after passing through brain tissue to measure changes in oxygenated and deoxygenated hemoglobin concentrations, providing an indirect measure of neural activity [108] [109]. |
| EEG System | Hardware that amplifies and records electrical potentials from the scalp, providing a direct, high-temporal-resolution measure of neural population activity [108] [109]. |
| Accelerometer / IMU | Auxiliary hardware often attached to the acquisition cap to provide a reference signal for motion, which is crucial for effective motion artifact removal in fNIRS data [37]. |
| Synchronization Interface | Hardware (e.g., parallel port for TTL triggers) or software (e.g., Lab Streaming Layer - LSL) that ensures temporal alignment of events and data streams between the EEG and fNIRS systems [109] [110]. |
| Preprocessing Software Toolboxes | Open-source software packages such as EEGLAB (for EEG) and HOMER3 (for fNIRS) that provide standardized functions for filtering, artifact removal, and other preprocessing steps [21]. |
The tables below summarize key quantitative data and preprocessing steps for both modalities.
Table 3: Characteristic Comparison of EEG and fNIRS
| Feature | EEG | fNIRS |
|---|---|---|
| What It Measures | Electrical activity from post-synaptic potentials | Hemodynamic response (HbO & HbR concentration) |
| Temporal Resolution | High (milliseconds) [108] [110] | Low (seconds) [108] [110] |
| Spatial Resolution | Low (centimeter-level) [108] | Moderate (better than EEG, ~1 cm) [108] [110] |
| Sensitivity to Motion | High [108] | Low to Moderate [108] |
| Best Use Cases | Fast cognitive tasks, ERPs, rapid neural dynamics [108] | Sustained cognitive states, localization, naturalistic studies [108] |
This diagram conceptualizes the complementary relationship between EEG and fNIRS signals in measuring brain activity.
Table 4: Example Preprocessing Steps for EEG and fNIRS
| Step | EEG Protocol [21] | fNIRS Protocol [21] |
|---|---|---|
| Downsampling | To 250 Hz | - |
| Filtering | High-pass at 1 Hz; remove line noise | Bandpass filter (e.g., 0.01-0.1 Hz) to isolate the slow hemodynamic band relevant to neurovascular coupling |
| Artifact Removal | clean_rawdata (Artifact Subspace Reconstruction, ASR); ICA | Motion artifact correction (e.g., Savitzky-Golay filtering, PCA/ICA) |
| Other Steps | Re-referencing (e.g., to global average); Spherical spline interpolation for bad channels | Convert optical density to chromophore concentration (e.g., HbO) |
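The final fNIRS step in Table 4, converting optical density to chromophore concentration, is typically done with the modified Beer-Lambert law: at each wavelength, the change in optical density is a weighted sum of Δ[HbO] and Δ[HbR], which can be inverted across two wavelengths. The sketch below uses placeholder extinction coefficients and pathlength factors; real analyses should substitute tabulated values for the actual wavelengths and probe geometry.

```python
import numpy as np

# Illustrative extinction coefficients (1/(mM*cm)) at two wavelengths;
# replace with tabulated values for your system's actual wavelengths.
eps = np.array([[1.4866, 0.6096],   # ~760 nm: [HbR, HbO]
                [0.7861, 1.1596]])  # ~850 nm: [HbR, HbO]
d = 3.0                     # source-detector separation (cm), assumed
dpf = np.array([6.0, 6.0])  # differential pathlength factors, assumed

def mbll(delta_od):
    """Invert the modified Beer-Lambert law at two wavelengths:
    delta_od = eps * d * dpf @ [dHbR, dHbO]  ->  solve for [dHbR, dHbO] (mM)."""
    A = eps * (d * dpf)[:, None]
    return np.linalg.solve(A, delta_od)

dhbr, dhbo = mbll(np.array([0.01, 0.02]))  # toy optical-density changes
```

Because the system is solved jointly at both wavelengths, errors in either extinction coefficient propagate into both chromophore estimates, which is one reason wavelength calibration matters.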
Q1: Our hybrid EEG-fNIRS classification accuracy for stroke patients is poor. What are the main challenges and solutions?
A: Poor classification accuracy in patient populations often stems from neurophysiological heterogeneity and signal artifacts. Patient signals vary widely in lesion location and artifact load; robust, modality-specific artifact correction and transfer learning from healthy-subject templates [115] can help mitigate these issues.
Q2: How do we validate that our fused EEG-fNIRS features are truly capturing clinically relevant biomarkers?
A: Clinical validation requires linking multimodal features to established clinical scales and outcomes.
Table 1: Clinically Validated Multimodal Biomarkers for Stroke/ICH Recovery
| Biomarker | Modality | Clinical Correlation | Associated Clinical Scale |
|---|---|---|---|
| Power Ratio Index (PRI) | EEG | Higher PRI predicts poorer functional motor outcome [114]. | FMA, mRS |
| Brain Symmetry Index (BSI) | EEG | Higher asymmetry correlates with worse neurological status and motor function [114]. | NIHSS, FMA-UE |
| Event-Related Desynchronization (ERD) | EEG | Magnitude correlates with residual motor function in the paretic arm [116]. | BBS, FMA |
| HbO Concentration | fNIRS | Increased activation in motor areas correlates with better balance function [116]. | BBS |
| Multimodal Fusion Accuracy | EEG-fNIRS | ~5-10% improvement in classification accuracy over unimodal approaches [101] [97]. | Diagnostic/Prediction Accuracy |
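As an illustration of how such biomarkers are computed, the sketch below implements one common formulation of a Brain Symmetry Index: the mean absolute normalized difference of spectral power between hemispheres. The exact definition used in [114] may differ, so treat this as a conceptual example with toy data.

```python
import numpy as np

def brain_symmetry_index(left_psd, right_psd):
    """BSI-style asymmetry measure on (channels, freqs) power arrays:
    0 = perfectly symmetric spectra, approaching 1 = maximally asymmetric."""
    l = np.asarray(left_psd).mean(axis=0)   # pool left-hemisphere channels
    r = np.asarray(right_psd).mean(axis=0)  # pool right-hemisphere channels
    return float(np.mean(np.abs((r - l) / (r + l))))

# Toy spectra: 4 homologous channel pairs, 20 frequency bins each.
rng = np.random.default_rng(1)
left = np.abs(rng.normal(1.0, 0.1, (4, 20)))
right = np.abs(rng.normal(1.0, 0.1, (4, 20)))
bsi = brain_symmetry_index(left, right)  # small value: near-symmetric spectra
```

In practice the power spectra would come from homologous electrode pairs (e.g., C3/C4) over a clinically relevant frequency band, and higher values would indicate greater interhemispheric asymmetry.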
Table 2: Key Datasets for Clinical Validation of EEG-fNIRS Fusion
| Dataset Name | Population | Task | Key Clinical Relevance |
|---|---|---|---|
| HEFMI-ICH [97] | 17 Normal subjects, 20 ICH patients | Left/Right hand Motor Imagery (MI) | First hybrid dataset specifically for ICH rehabilitation research. |
| Private Data [115] | 17 Normal controls, 13 ICH patients | Motor Imagery (MI) | Enables transfer learning from healthy templates to patient data. |
Protocol 1: Motor Imagery for ICH Rehabilitation [97] [115]
Protocol 2: Ankle Dorsiflexion for Balance Prediction [116]
EEG-fNIRS Clinical Validation Pipeline
Troubleshooting Low Classification Accuracy
Table 3: Essential Research Reagents & Solutions for EEG-fNIRS Clinical Studies
| Item | Function/Description | Example/Specification |
|---|---|---|
| Hybrid EEG-fNIRS Cap | Integrated headgear with co-located electrodes and optodes for simultaneous data acquisition. | Custom-designed cap with 32 EEG electrodes, 32 fNIRS sources, and 30 detectors [97]. |
| Synchronization Interface | Ensures temporal alignment of EEG and fNIRS data streams with experimental events. | E-Prime 3.0 software sending event markers to both acquisition systems [97] [116]. |
| Clinical Assessment Scales | Standardized tools to quantify patient impairment and recovery for biomarker validation. | Fugl-Meyer Assessment (FMA), Berg Balance Scale (BBS), Modified Rankin Scale (mRS) [97] [114] [116]. |
| Mutual Information Toolbox | Computational tool for feature selection to maximize complementarity and minimize redundancy. | Used to optimize fused feature subsets for improved classifier performance [101]. |
| Structured Sparse Multiset CCA (ssmCCA) | Advanced data fusion algorithm to identify brain regions consistently active in both EEG and fNIRS. | Used to fuse electrical and hemodynamic responses and pinpoint shared neural regions [63]. |
Q1: What are the most common pitfalls when starting with a public fNIRS-EEG dataset, and how can I avoid them?
A1: The most common pitfalls involve ignoring temporal misalignment and modality-specific artifacts. fNIRS has an inherent physiological delay relative to EEG: it measures slow hemodynamic responses that peak 5-8 seconds post-stimulus, while EEG captures near-instantaneous electrical activity [8] [118]. Account for this delay when aligning the two streams and defining feature windows, and apply modality-specific artifact correction before fusion.
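One way to account for the hemodynamic delay is to convolve the fast (stimulus or EEG-derived) time course with a canonical hemodynamic response function before comparing it with the fNIRS signal. The sketch below uses a simple single-gamma HRF; the specific HRF shape and parameters are assumptions for illustration, not prescribed by the cited sources.

```python
import numpy as np

def gamma_hrf(fs, shape=6.0, scale=1.0, length=20.0):
    """Single-gamma HRF kernel; its mode falls at (shape - 1) * scale
    seconds (~5 s here), within the 5-8 s range typical of fNIRS."""
    t = np.arange(0, length, 1 / fs)
    h = t ** (shape - 1) * np.exp(-t / scale)
    return h / h.sum()  # unit area, so convolution preserves scale

fs = 10                      # Hz, fNIRS sampling rate
stim = np.zeros(60 * fs)     # 60 s run
stim[5 * fs] = 1.0           # single stimulus at t = 5 s
predicted_hbo = np.convolve(stim, gamma_hrf(fs))[: stim.size]
peak_time = np.argmax(predicted_hbo) / fs  # stimulus time + ~5 s lag
```

The predicted trace can then serve as a regressor against measured HbO, putting the fast and slow modalities on a common temporal footing.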
Q2: My hybrid model isn't performing better than my unimodal model. What could be wrong?
A2: This often indicates an issue with feature-level fusion or model architecture. Simply concatenating features from both modalities does not guarantee better performance; the two feature sets differ in scale and dimensionality, so per-modality normalization, complementarity-driven feature selection [101], or a dedicated fusion architecture is usually needed.
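A frequent culprit is scale mismatch: microvolt-scale EEG features can dwarf micromolar-scale fNIRS features, so a classifier effectively ignores the smaller modality. Below is a minimal sketch of per-modality standardization before concatenation; all shapes and scales are illustrative.

```python
import numpy as np

def zfuse(eeg_feats, fnirs_feats):
    """Z-score each modality's features across trials before concatenating,
    so both modalities contribute on a comparable scale downstream."""
    def z(x):
        return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-12)
    return np.hstack([z(eeg_feats), z(fnirs_feats)])

eeg = np.random.randn(100, 30) * 50.0    # e.g., band powers (large scale)
fnirs = np.random.randn(100, 12) * 1e-3  # e.g., mean dHbO values (tiny scale)
fused = zfuse(eeg, fnirs)                # shape: (100, 42), unit-variance columns
```

In a real pipeline, the normalization statistics must be computed on the training set only and then applied to the test set, to avoid information leakage.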
Q3: I have limited data for my specific task. How can synthetic data help?
A3: Synthetic data can significantly expand training sets, mitigating overfitting and improving model generalization.
Issue 1: Poor Classification Accuracy in Motor Imagery Tasks
Issue 2: Handling Motion Artifacts and Physiological Noise
Issue 3: Synchronizing EEG and fNIRS Data Streams
The table below summarizes key publicly available datasets for fNIRS-EEG fusion research, highlighting their scope and application.
| Dataset Name | Modality | Subjects & Cohorts | Key Tasks | Key Specifications | Primary Research Use |
|---|---|---|---|---|---|
| Multi-modal EEG-fNIRS [93] | EEG & fNIRS | 18 Healthy | 8 MI tasks of hand, wrist, elbow, shoulder | 64-channel EEG, 24-channel fNIRS; 5760 trials | Developing decoding algorithms for multi-joint MI |
| HEFMI-ICH [97] | EEG & fNIRS | 17 Healthy, 20 ICH Patients | Left/Right hand MI | 32-channel EEG, 90-channel fNIRS; synchronized acquisition | ICH rehabilitation, clinical BCI translation |
| WBCIC-MI [120] | EEG | 62 Healthy | 2-class & 3-class MI (hand, foot) | 64-channel EEG; multi-session (3 days) | Cross-session and cross-subject MI-BCI research |
| Synthetic fNIRS-EEG [8] | Synthetic fNIRS & EEG | Simulated | Finger-tapping motor task | Simulates shared neuronal source with ground truth | Method development and validation for fusion algorithms |
1. Protocol: Multi-joint Motor Imagery [93]
2. Protocol: HEFMI-ICH Clinical Dataset [97]
Synthetic data generation addresses the critical challenge of data scarcity in training deep learning models, which often require large amounts of data to generalize effectively [119].
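Full generative models such as DDPMs [119] are one option; for a sense of the mechanics, the sketch below shows a much simpler augmentation scheme (additive noise plus small time shifts). This is a baseline alternative for illustration, not the cited method, and all parameters are invented.

```python
import numpy as np

def augment_trials(trials, n_copies=2, noise_sd=0.05, max_shift=5, seed=0):
    """Augment (trials, channels, samples) data by adding small Gaussian
    noise and circularly shifting in time; returns originals + copies."""
    rng = np.random.default_rng(seed)
    out = [trials]
    for _ in range(n_copies):
        shift = int(rng.integers(-max_shift, max_shift + 1))
        noisy = trials + rng.normal(0, noise_sd, trials.shape)
        out.append(np.roll(noisy, shift, axis=-1))
    return np.concatenate(out, axis=0)

real = np.random.randn(40, 32, 750)   # 40 trials, 32 channels, 3 s @ 250 Hz
augmented = augment_trials(real)      # 3x the trial count: (120, 32, 750)
```

Whatever the augmentation method, augmented copies of a trial must stay on the same side of the train/test split as their source trial, or evaluation scores will be inflated.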
The following diagram illustrates a hybrid data processing and augmentation pipeline that integrates both real and synthetic data to improve model performance.
This table lists essential tools and software used in the featured experiments for fNIRS-EEG research.
| Tool / Solution | Function / Description | Example Use in Research |
|---|---|---|
| Neuroscan SynAmps2 | EEG signal amplifier for high-quality data acquisition. | Used with a 64-channel cap for EEG recording at 1000 Hz [93]. |
| NIRScout System (NIRx) | fNIRS device for measuring hemodynamic responses. | Configured with 8 sources and 8 detectors to create 24 fNIRS channels [93]. |
| g.HIamp (g.tec) & NirScan | Synchronized hybrid EEG-fNIRS acquisition systems. | Employed for simultaneous recording in clinical ICH studies [97]. |
| E-Prime | Software for designing and running experimental paradigms. | Used to present visual/auditory cues and send trigger markers to both EEG and fNIRS systems [97]. |
| Denoising Diffusion Probabilistic Model (DDPM) | A deep generative model for creating synthetic data. | Applied for EEG-fNIRS data augmentation to improve classifier generalization [119]. |
| ShallowConvNet / EEGNet | Deep learning models specifically designed for EEG and time-series classification. | Served as baseline models for benchmarking classification performance on MI tasks [93] [120]. |
The effective fusion of fNIRS and EEG hinges on robust, standardized preprocessing pipelines that handle the distinct artifacts and characteristics of each modality while leveraging their complementary information through neurovascular coupling. Methodological advancements in data-driven fusion, cross-modal attention, and deep learning are significantly enhancing the spatiotemporal resolution and decoding accuracy of multimodal brain signals. Looking forward, the development of low-code, reproducible software frameworks and the creation of comprehensive, clinically relevant datasets will be crucial for translating these techniques from research labs into practical tools for drug development, personalized neurorehabilitation, and a deeper understanding of brain function in both healthy and pathological states.