What we do:
Electrical Stimulation for Haptic Feedback in VR
EEG-Guided Electrical Stimulation for Immersive Virtual Reality is a project funded by NSF. In collaboration with researchers at UPitt, we are working to develop sensory models of fingertips stimulated spatially and temporally with electrical current waveforms, in order to eventually enable realistic haptic feedback for virtual and mixed reality applications where the feeling of texture is essential to an immersive experience.
TDCS to Improve Motivation & Memory in Elderly (TIME)
Background
Recently, it was shown that a group of elderly adults, dubbed “superagers”, are indistinguishable from young adults in memory performance as well as in the structure of certain brain regions. A key superaging region is the mid-cingulate cortex (MCC), a brain structure associated with motivation and tenacity. The goal of the TIME project is to explore the contribution of motivation to memory performance by modulating MCC connectivity with non-invasive brain stimulation. This project combines prior research on superagers, functional magnetic resonance imaging (fMRI) research on network function, and expertise in tDCS and modeling into two innovative studies to provide the first causal evidence that experimentally induced motivation can improve memory performance. Read an interview about superagers with PI Dr. Touroutoglou here.

Study design
In a randomized double-blind placebo-controlled study, we are examining how memory performance can best be influenced with tDCS, additionally investigating the effects of stimulation on motivation and network connectivity. We are comparing three novel tDCS protocols that were designed using computational models of brain stimulation. One of the protocols is individually optimized for each participant. The study consists of five 20-minute stimulation sessions on consecutive days, with a baseline and follow-up on the first and last day consisting of memory tasks and an fMRI scan.
Interested in participating?
We are looking for healthy Northeastern University students 18-35 years of age. Participants will spend ~13 hours in the lab across 6 visits and will be compensated for their time with up to $220 in gift cards. All sessions take place during business hours on consecutive days in the Interdisciplinary Science & Engineering Complex (ISEC) of Northeastern University, located at 805 Columbus Ave in Boston. If you are interested in participating, you can find more information here, and contact us at time.mgh.nu@gmail.com or 617-379-1112.
Team: Sumientra Rampersad (PI Northeastern University), Alexandra Touroutoglou (PI MGH, Harvard Medical School), Dana Brooks (NU), Mark Eldaief (MGH), Lisa Feldman Barrett (NU)
Funding: NIH, National Institute on Aging: 1R21AG061743-01

Securing GNSS-based Infrastructures
This project develops novel anti-jamming techniques for Global Navigation Satellite Systems (GNSS) that are effective yet computationally affordable. GNSS is ubiquitous in civilian, security, and defense applications, causing a growing dependence on such technology for positioning and timing, particularly in critical infrastructures. The threat of a potential disruption of GNSS is real and could have catastrophic consequences. This project studies methods to secure GNSS receivers from jamming interference while staying within size, weight, and power (SWAP) requirements. Existing solutions are either bulky and not cost-effective, such as those based on antenna-array technology, or adapted to a specific interference type. In addition, most of these solutions require detecting and classifying the interference before mitigating its effects, which constitutes a single point of failure in the process. This project will investigate GNSS receivers that are resilient to interference without requiring detection and classification, leveraging robust statistics to design methods that require few modifications with respect to state-of-the-art receiver architectures, keeping SWAP requirements comparable to those of standard GNSS receivers. The findings will be implemented and validated on an end-to-end GNSS software-defined radio receiver, successfully transitioning research into practice. Educational activities are closely integrated with this research agenda, including a course developed by the principal investigator and outreach activities.
This research advances knowledge of how robust statistics can be leveraged to design cost-effective and efficient anti-jamming mitigation techniques for GNSS. The main premise of the project is that most interference sources have a sparse representation, in which they can be seen as outliers with respect to the nominal signal model. Tools from robust statistics are then used to discard those outliers in a principled manner, identifying and substituting specific critical operations in GNSS processing. This approach avoids the need to detect and estimate the interference, processes which can themselves introduce errors. The project envisions a lightweight yet robust GNSS receiver that can easily substitute for current GNSS receivers supporting the operation of critical infrastructures. It will enable reliable and precise anti-jamming technology with drastic SWAP and cost improvements. In particular, the project will provide a GNSS receiver solution that can cope with common jamming interference. The development of such receiver enhancements, along with their validation in a software receiver, will allow for large-scale deployments of GNSS receivers that are more resilient and reliable.
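The core idea above, treating impulsive jamming samples as statistical outliers and downweighting them instead of first detecting and classifying the interference, can be illustrated with a toy sketch. This is not the project's receiver code; all numbers are invented, and a Huber M-estimator of location stands in for the robust operations that would replace critical steps in GNSS processing:

```python
import random
import statistics

def huber_weight(r, k=1.345):
    """Huber weight: 1 for small residuals, k/|r| for outliers."""
    ar = abs(r)
    return 1.0 if ar <= k else k / ar

def huber_location(samples, k=1.345, iters=20):
    """Robust location estimate via iteratively reweighted averaging."""
    mu = statistics.median(samples)  # robust starting point
    # Robust scale: normalized median absolute deviation.
    scale = statistics.median(abs(x - mu) for x in samples) / 0.6745 or 1.0
    for _ in range(iters):
        w = [huber_weight((x - mu) / scale, k) for x in samples]
        mu = sum(wi * xi for wi, xi in zip(w, samples)) / sum(w)
    return mu

random.seed(0)
# Nominal samples around 1.0, plus a burst of strong jamming outliers.
nominal = [random.gauss(1.0, 0.1) for _ in range(95)]
jammed = nominal + [random.gauss(50.0, 5.0) for _ in range(5)]

plain_mean = statistics.fmean(jammed)  # dragged far from 1.0 by the burst
robust_mean = huber_location(jammed)   # stays close to 1.0
```

No detection or classification step is needed: the weighting automatically suppresses whichever samples deviate grossly from the nominal model, which is the property that keeps the approach lightweight.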
Principal Investigator: Pau Closas (Northeastern University)
Research Group: Information Processing Lab
Funding: National Science Foundation (CNS-1815349)
Abstract Source: NSF

Pose Estimation: Understanding the State of the World
We represent the state of the world in a low-dimensional subspace, called “pose”, which is a succinct, interpretable representation of the important information in the state. The state can then be estimated and predicted from this low-dimensional representation. The pose can also drive a semi-supervised generative model that renders and expands the labelled examples in the state space, providing data augmentation for deep learning algorithms.
At the Augmented Cognition Lab (ACLab), we currently work on three active projects that explore different aspects of pose estimation (the cited papers are available on the ACLab webpage along with their datasets and code):
(1) Articulated/Deformable Body Pose Estimation:
- “Inner Space Preserving Generative Pose Machine,” ECCV 2018.
- “A Semi-Supervised Data Augmentation Approach using 3D Graphical Engines,” ECCV/HBU 2018.
- “Moving Object Detection through Robust Matrix Completion Augmented with Objectness,” J-STSP 2018.
- “In-Bed Pose Estimation: Deep Learning with Shallow Dataset,” arXiv preprint 2018.
- “A Vision-Based System for In-Bed Posture Tracking,” ICCV/ACVR 2017.
- “Long-Term Non-Contact Tracking of Caged Rodents,” ICASSP 2017.
(2) Affective Pose Estimation:
- “Facial Expression and Peripheral Physiology Fusion to Decode Individualized Affective Experience,” IJCAI/AffCom 2018.
- “The Emotional Voices Database: Towards Controlling the Emotion Dimension in Voice Generation Systems,” arXiv preprint 2018.
- “Decoding Emotional Experiences through Physiological Signal Processing,” ICASSP 2017.
(3) Environment Pose and Scene Understanding:
- “First-Person Indoor Navigation via Vision-Inertial Data Fusion,” IEEE/ION PLANS 2018.
- “Background Subtraction via Fast Robust Matrix Completion,” ICCV/RSL-CV 2018.
Funding: National Science Foundation (NSF), MathWorks, Amazon Web Services (AWS), NVIDIA.
Skin Cancer Diagnosis
In the USA, melanoma is diagnosed in approximately 124,000 people and is responsible for about 10,000 deaths every year. Dermatologists rely on visual and dermatoscopic examination to discriminate benign melanocytic lesions from malignant ones, resulting in high and highly variable benign-to-malignant biopsy ratios, from 8:1 to 47:1, and millions of unnecessary biopsies of benign lesions. Reflectance confocal microscopy (RCM) imaging has been proven to noninvasively guide diagnosis of melanoma in several large clinical studies. RCM imaging at the dermal-epidermal junction (DEJ) provides sensitivity of 88-92% and specificity of 71-84%; the specificity is twice that of dermatoscopy. RCM imaging at the DEJ is now being implemented to rule out malignancy, reduce biopsies, and guide treatment. However, this is currently happening at only a few sites, where highly trained experts can ensure that imaging is appropriately performed and images are read correctly. These experts are a small international cohort of “early adopter” clinicians who have worked with RCM technology during the past decade and have become highly skilled readers. For novice (non-expert) clinicians in the wider cohort who are keen to adopt RCM, learning to read images is challenging and requires substantial effort and time. Two major technical barriers underlie the dramatic variability in diagnostic accuracy among novice clinicians, and together they limit the utility, reproducibility, and wider adoption of RCM. The first is user-dependent subjective variability in the depths near the DEJ at which images are acquired; the second is variability in the interpretation of images. We propose to address these barriers with computational “multi-faceted” classification modeling (innovation), image analysis, and machine learning algorithms.
Our specific aims are: (1) to develop and evaluate algorithms for both dermatoscopic images and RCM depth-stacks, to enable automated, standardized, and consistent acquisition of RCM mosaics at the DEJ in melanocytic lesions; (2) to develop and evaluate algorithms to discriminate patterns of cellular morphology at the DEJ into two classes, benign lesions versus malignant (dysplastic lesions and melanoma); and (3) to test our algorithms on patients for acquisition of RCM mosaics and classification into those two groups, with statistical validation against pathology. Preliminary studies show that our algorithms can delineate the DEJ with accuracy in the range of ~3-13 μm in strongly pigmented dark skin and ~5-20 μm in lightly pigmented fair skin, and can detect cellular morphologic patterns with sensitivity in the range of 67-80% and specificity of 78-99%. Melanocytic lesions can be distinguished from the surrounding normal skin at the DEJ with 80% classification accuracy. Our success will produce standardized imaging and analysis approaches to advance RCM for noninvasive detection of melanoma. Furthermore, these approaches can be useful for non-melanoma skin cancers, cutaneous lymphoma, and other skin disorders (wider impact).
Brain Computer Interfaces
People with severe speech and physical impairments can benefit from a direct brain-computer interface for their communication needs. This project aims to develop an AAC interface that uses noninvasive EEG sensors to infer the user’s intent regarding desired letters and symbols during text generation. The RSVP Keyboard system utilizes rapid serial visual presentation of letter sequences coupled with probabilistic, adaptive, open-vocabulary language models and EEG signal processing and classification algorithms. The brain interface relies on event-related potentials, including the P300 signal. The project tightly couples feedback from locked-in consultants, who test the system at regular intervals and provide critical input for future design improvements.
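The coupling of a language-model prior with EEG evidence described above is, at its core, a Bayesian fusion step. A minimal sketch of that idea, with entirely invented probabilities (the real system's classifiers, vocabulary, and presentation scheme are far richer):

```python
# Fuse a language-model prior over candidate letters with EEG evidence
# from one RSVP presentation, via Bayes' rule.

def fuse(prior, eeg_likelihood):
    """posterior(letter) ∝ prior(letter) * P(EEG | letter is the target)."""
    unnorm = {c: prior[c] * eeg_likelihood.get(c, 1.0) for c in prior}
    z = sum(unnorm.values())
    return {c: p / z for c, p in unnorm.items()}

# Hypothetical numbers: after typing "TH", the language model favors 'E'.
prior = {"E": 0.6, "A": 0.2, "I": 0.15, "O": 0.05}
# But the flash containing 'A' evoked a strong P300-like classifier score.
likelihood = {"E": 0.3, "A": 3.0, "I": 0.3, "O": 0.3}

posterior = fuse(prior, likelihood)
best = max(posterior, key=posterior.get)  # EEG evidence overrides the prior
```

In an adaptive system this posterior would become the prior for the next presentation, so evidence accumulates across flashes until a confidence threshold is reached.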
Funding: National Science Foundation (IIS-1149570, CNS-1544895, IIS-1715858), Department of Health and Human Services (90RE5017-02-01), and National Institutes of Health (R01DC009834)
ASSIST/iROP
Retinopathy of prematurity (ROP) is a leading cause of childhood visual loss worldwide, and the social burdens of infancy-acquired blindness are enormous. Early diagnosis is critically important for successful treatment, and can prevent most cases of blindness. However, lack of access to expert medical diagnosis and care, especially in rural areas, remains a growing healthcare challenge. In addition, clinical expertise in ROP is lacking, and medical professionals are struggling to meet the increasing need for ROP care. As point-of-care technologies for diagnosis and intervention rapidly expand, the ability to assess ROP severity from any location with an internet connection and a camera, even without immediate ophthalmologic consultation available, could significantly improve delivery of ROP care by identifying the infants in most urgent need of referral and treatment. This would dramatically reduce the incidence of blindness without a proportionate increase in the need for human resources, which take many years to develop.
Project Website: i-rop.github.io
Principal Investigators: Jayashree Kalpathy-Cramer (HMS-MGH), Stratis Ioannidis (DNAL), Jennifer Dy (ML), Deniz Erdogmus (CSL), Michael Chiang (OHSU), Kemal Sonmez (OHSU), J. Peter Campbell (OHSU), R.V. Paul Chan (UIC).
Funding: National Institutes of Health (R01EY019474, P30EY10572), National Science Foundation (SCH-1622542; SCH-1622536; SCH-1622679).
Scalable Graph Distances
Representations of real-world phenomena as graphs are ubiquitous, ranging from social and information networks, to technological, biological, chemical, and brain networks. Many graph mining tasks — including clustering, anomaly detection, nearest neighbor, similarity search, pattern recognition, and transfer learning — require a distance measure between graphs to be computed efficiently. The existing distance measures between graphs leave a lot to be desired. They are overwhelmingly based on heuristics. Many do not scale to graphs with millions of nodes; others do not satisfy the metric properties of non-negativity, positive definiteness, symmetry, and triangle inequality. This project studies a formal mathematical foundation covering a family of graph distances that overcome these limitations, focusing on real-world applications in biology and social network analysis. It also provides a universal methodology for parallelizing the computation of graph distance metrics within this family over massive graphs with millions of nodes, and scaling it over cloud computing resources.
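The metric properties listed above (non-negativity, symmetry, triangle inequality) can be made concrete with a deliberately simple example. The sketch below uses an L1 distance between sorted degree sequences, a cheap pseudometric chosen purely for illustration, not one of the project's distance measures; note that a distance of zero here does not imply the graphs are isomorphic:

```python
# Toy "degree-sequence distance": L1 distance between sorted degree
# sequences, padded to equal length. Symmetric and satisfies the
# triangle inequality, but it is only a pseudometric on graphs.

def degree_sequence(edges, n):
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return sorted(deg, reverse=True)

def degree_distance(g, h):
    a, b = degree_sequence(*g), degree_sequence(*h)
    # Pad the shorter sequence with zeros so both align position-wise.
    if len(a) < len(b):
        a += [0] * (len(b) - len(a))
    else:
        b += [0] * (len(a) - len(b))
    return sum(abs(x - y) for x, y in zip(a, b))

# Three small graphs as (edge list, node count): triangle, path, star.
triangle = ([(0, 1), (1, 2), (0, 2)], 3)
path = ([(0, 1), (1, 2)], 3)
star = ([(0, 1), (0, 2), (0, 3)], 4)

d_tp = degree_distance(triangle, path)
# Metric sanity checks: symmetry and the triangle inequality.
assert degree_distance(path, triangle) == d_tp
assert degree_distance(triangle, star) <= d_tp + degree_distance(path, star)
```

Distances in the family the project studies must keep exactly these properties while also remaining computable, in parallel, on graphs with millions of nodes, which is where heuristics like this one fall short.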
Principal Investigators: Stratis Ioannidis (DNAL), Tina Eliassi-Rad (Network Science Institute/CCIS, Northeastern University), Jose Bento (Boston College).
Funding: National Science Foundation, Google Cloud Services (IIS-1741197)
Focus On Cognitive Impairment (FOCI)
This project aims to develop effective methods for early detection of cognitive changes. Using MRI, behavioral measures and our smartphone app, we will develop a computational model that can predict cognitive function from smartphone usage. Outcomes of this project may improve care and research for patients with cognitive impairments.
Background
Cognitive deficits associated with aging and neurodegenerative diseases pose major challenges to healthcare systems throughout the world. To facilitate successful aging, we need effective methods for early detection of neurodegeneration and for monitoring cognitive function. Recent advances in mobile health and artificial intelligence allow for inferring context information from smartphones. In this project, we will create a method for estimating cognitive changes from data collected passively from smartphone use. We will develop and evaluate the method by quantifying how accurately smartphone data predict cognitive changes estimated during lab tests and structural brain changes.
Study design
This is a longitudinal observational study that spans one year. Each participant will visit the lab 3 times: at the start of the study and after 6 and 12 months. During these sessions, we will acquire EEG recordings and participants will perform cognitive and motor tasks. During the first visit, our app will be installed on the participant’s phone. This app will collect phone use data such as what application was opened when, typing speed, time of calls, location information, mode of transportation, walking speed, device and screen status. It will also ask participants to answer very brief questions. To respect privacy, we do not collect data about what was typed on the phone, browsing or search history, received messages, audio, photos or video.
Interested in participating?
We are looking for right-handed volunteers 45-65 years of age, who own and regularly use an Android smartphone. Participants will spend ~8 hours in the lab across 3 visits and will be compensated for their time with up to $500 in gift cards. Sessions take place seven days per week in the Interdisciplinary Science & Engineering Complex (ISEC) of Northeastern University, located at 805 Columbus Ave in Boston. You can find more information about the FOCI research study here. If you are interested in participating, please fill out this online form. If you have questions, you can contact us at foci.study@gmail.com or 617-379-1112.
Human-Robot Object Handover
Coordination of Dyadic Object Handover for Human-Robot Interactions is a project funded by NSF. In collaboration with the Tunik and RIVER Labs at Northeastern, we are modeling natural human-to-human object handover dynamics in order to develop robotic behavior strategies for more human-like human-to-robot and robot-to-human object handover in the human-robot teams of the future.
Estimating Protein Function From Structure
Mining for Mechanistic Information to Predict Protein Function is a project funded by NSF. In collaboration with researchers from the Chemistry Department, we are using machine learning techniques to develop computational models that can predict protein function from chemical and molecular structure. The models will also be explainable, in the sense that active residues will be identified and their roles will be connected to the predicted protein function.
Predicting Epileptogenesis After TBI
Multimodal Signal Analysis and Data Fusion for Post-traumatic Epilepsy Prediction is a project funded by NIH. In collaboration with researchers at USC Medical School, we are using machine learning techniques to discover features from multimodal data such as EEG, fMRI, DTI, and blood chemistry, in order to build models that can predict if a traumatic brain injury (TBI) patient is susceptible to epileptogenesis – emergence of epilepsy following TBI.
Modeling TMS-induced Motor Evoked Potentials
Understanding Motor Cortical Organization Through Engineering Innovation to TMS-Based Brain Mapping is a project funded by NSF. In collaboration with researchers at the Tunik Lab and MGH, we are developing hybrid models that combine physics-based partial differential equations and deep neural networks to predict transcranial magnetic stimulation (TMS) induced motor evoked potentials (MEPs) in the upper limbs of humans. These models will then be used to develop inverse motor cortex activation imaging methods to estimate how activations of muscle groups in the arms are represented in the motor cortex. We are also exploring active learning techniques for rapid, label-efficient modeling in this context.
Predicting the Onset of Aggression in Children with Autism Using Wearable Sensor Data Fusion
Predicting Onset of Aggression in Minimally Verbal Youth with Autism Using Biosensor Data and Machine Learning Algorithms is a project funded by the Simons Foundation and the US Army. In collaboration with researchers at the Maine and UPitt Medical Centers, we are developing sensor-fusion algorithms to predict upcoming onset of aggressive behavior in minimally verbal children with autism. Our algorithms will give caregivers real-time information and cues regarding the mental state of the child they are interacting with.