Neurologic disorders associated with behavioral symptoms
Jul. 03, 2022
Digital neurology involves the use of digital technologies in clinical practice as well as in research in the neurosciences. Digital tools include mobile devices and wearable biosensors. The scope of digital neurology includes analysis of data generated via algorithms, neuroimaging, electronic medical records, artificial intelligence, neurorobotics, the use of digital biomarkers for clinical trials, and the development of personalized medicine.
• Digital neurology, as part of digital medicine, involves digitalization of human healthcare data; use of biosensors to track body functions; as well as processing of the vast data generated via algorithms, cloud computing, and artificial intelligence.
• Portable and wearable devices are being used not only for diagnosis, but also for the integration of diagnosis with therapeutics for personalized neurology.
• Artificial intelligence, using algorithms based on computational models of biological neural networks, is integrated in machines such as neurorobots or neuroprosthetic devices for neurorehabilitation.
• Electronic medical records are widely used in healthcare to improve efficiency and reduce costs, but their use in neurology has been limited by the requirements of complex diagnostic data.
• Digital biomarkers are used in clinical trials of neurologic disorders.
Digital medicine is defined as the use of digital tools to advance the practice of medicine to high-definition as well as individualized levels and encompasses our ability to digitize human beings using biosensors that not only track our complex physiologic systems, but also process the vast data generated via algorithms, cloud computing, and artificial intelligence (42). An important feature is the development of technological solutions to monitor, process, and integrate different data at the individual and population levels to help address the health problems and challenges faced by patients, clinicians, and healthcare systems. The scope of digital medicine is much broader and includes mobile devices, neuroimaging, electronic medical records, artificial intelligence, robotics, and personalized medicine. Digital technologies are used in pharmaceutical medicine and clinical trials. In 2018, the U.S. Food and Drug Administration (FDA) granted approval to an artificial intelligence–based device to detect certain diabetes-related eye problems. This was the first market approval of a medical device that performs a screening test and provides a referral to a specialist without the need for a clinician to interpret the image or results. Digital neurology includes the application of all these digital tools in the neurosciences.
• Deep learning, a specific form of machine learning subset of artificial intelligence, means that a computer can use continuous learning algorithms to teach itself based on large data sets.
• Interconnections of brains and machines enable neuromodulation and neurorehabilitation.
• SpiNNaker, the fastest brain-mimicking machine, creates models of the neurons in the human brain and simulates more of them in real time than any other computer.
• The Neurorobotics Platform is a web-based program that provides scientists with a software infrastructure enabling them to connect brain models to detailed simulations of robot bodies and environments and to use the resulting neurorobotic systems for in silico experiments.
• Optogenetics is a technique in which genes for light-sensitive proteins are introduced into specific types of neurons to monitor and control their activity.
Numerous technologies have been integrated to develop digital medicine, and these are applicable to digital neurology. A few of them are briefly described in this section.
Artificial intelligence. This term is used for attempts to recreate intelligence by artificial (nonbiological) means using algorithms based on computational models of biological neural networks and their integration in machines such as robots or neuroprosthetic devices. Machine learning is a subset of artificial intelligence. Deep learning is a specific form of machine learning when a computer can use continuous learning algorithms to teach itself based on large data sets and other inputs. Use of artificial intelligence, and the deep-learning subtype in particular, has been enabled by the availability of labeled big data with markedly enhanced computing power and cloud storage, which has an impact at 3 levels: (1) for clinicians, predominantly via rapid, accurate image interpretation; (2) for health systems, by improving workflow and the potential for reducing medical errors; and (3) for patients, by enabling them to process their own data to promote health (44).
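How a model "teaches itself" from labeled data can be illustrated with a minimal sketch: a single logistic neuron trained by gradient descent on a toy dataset. All values here are hypothetical, and a real deep-learning system stacks many such units into deep networks.

```python
import math

def train_neuron(samples, labels, lr=0.5, epochs=200):
    """Train a single logistic neuron by gradient descent.

    samples: list of feature vectors; labels: 0/1 targets.
    Returns learned weights (last entry is the bias).
    """
    n = len(samples[0])
    w = [0.0] * (n + 1)  # feature weights plus bias
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + w[-1]
            p = 1.0 / (1.0 + math.exp(-z))  # sigmoid activation
            err = p - y                      # prediction error
            for i in range(n):
                w[i] -= lr * err * x[i]      # weight update
            w[-1] -= lr * err                # bias update
    return w

def predict(w, x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + w[-1]
    return 1 if z > 0 else 0

# Toy example: learn logical OR from four labeled points
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 1]
w = train_neuron(X, y)
print([predict(w, x) for x in X])  # → [0, 1, 1, 1]
```

The repeated weight updates over the data are the "continuous learning" referred to above; with large labeled datasets and many layers, the same principle yields deep learning.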
Brain-computer interface. This is also referred to as the hybrid brain-computer system. Close interconnections of brains and machines enable neuromodulation as well as sensory and motor function rehabilitation. The combination of biological and artificial intelligence involves 2 major areas of research: (1) control of mechanical, usually prosthetic, devices by conscious biological subjects; and (2) control of animal behavior by stimulating the nervous system electrically or optically. A rat model of a hybrid brain-computer system, the ratbot, exhibits superior learning abilities in a maze learning task, even when its vision and whisker sensation are blocked, which indicates its potential for application in neural rehabilitation (49).
Computer simulation of the brain and machine learning. Machine learning algorithms based on deep neural networks have been used for several complex cognitive tasks, but there is a significant gap between the energy and efficiency of the computational systems that implement these algorithms compared to the human brain. The complexity of the human brain is difficult to simulate. Research is focused not only on designing new artificial intelligence algorithms, device technologies, and integration schemes, but also on overcoming the limitations of conventional computers in these tasks. Spiking neural networks are the third generation of artificial neuron models that simulate the key time-based information encoding and processing aspects of the brain (38). Efforts have been made during the past decade to build supercomputers, such as those built on IBM’s TrueNorth neurosynaptic chip, to achieve these objectives.
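The time-based encoding that spiking neural networks exploit can be sketched with the simplest spiking-neuron abstraction, the leaky integrate-and-fire model; the constants below are illustrative and not drawn from SpiNNaker or TrueNorth.

```python
def simulate_lif(current, steps, dt=1.0, tau=10.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: the membrane potential decays
    toward rest while input current drives it upward; when it crosses
    the threshold, a spike is recorded and the potential resets."""
    v = v_rest
    spikes = []
    for t in range(steps):
        # Euler step of dv/dt = (v_rest - v)/tau + I
        v += dt * ((v_rest - v) / tau + current)
        if v >= v_thresh:
            spikes.append(t)  # spike time encodes the input
            v = v_reset       # reset after firing
    return spikes

# A constant input produces a regular spike train;
# stronger input -> higher firing rate
weak = simulate_lif(current=0.15, steps=100)
strong = simulate_lif(current=0.5, steps=100)
print(len(weak), len(strong))
```

Information is carried in spike timing and rate rather than in continuous activations, which is what neuromorphic hardware is built to simulate efficiently.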
In 2018, computer engineers at the University of Manchester launched a supercomputer with 1 million processors and 1200 interconnected circuit boards, the fastest brain-mimicking machine built to date. Named Spiking Neural Network Architecture, or SpiNNaker, it creates models of the neurons in the human brain and simulates more neurons in real time than any other computer, with a capacity to perform 200 quadrillion actions simultaneously. Its primary task is to support models of various regions of the brain, eg, cortex, basal ganglia, or multiple regions that typically function as networks of spiking or firing neurons. SpiNNaker will enable scientists to create detailed brain models to construct a virtual human brain for the European Union's Human Brain Project. It behaves like the brain's neural network and supports a high level of communication among its processors. It has the potential to aid the study of neurologic disorders such as Parkinson disease, but exact simulation of the human brain is not yet possible: the supercomputer approaches only 1% of the human brain's capability and cannot think for itself. SpiNNaker can, however, control a mobile robot.
Neurorobotics. “Neurorobotics” is the term used for the application of artificial intelligence and robotics in neuroscience. Brain-inspired algorithms may be used for brain-controlled robot systems. Although there are adequate tools for simulating either complex neural networks or robots and their environments, there is a paucity of tools that enable communication between brain and body models. The Neurorobotics Platform is a web-based program that provides scientists with a software infrastructure enabling them to connect brain models to detailed simulations of robot bodies and environments and to use the resulting neurorobotic systems for in silico experiments (09). Neurorobots are powerful tools for studying neural function in a holistic fashion and may also enable the development of autonomous systems that have some level of biological intelligence (21).
Optogenetics for neuroscience research. Optogenetics is a technique in which genes for light-sensitive proteins are introduced into specific types of neurons to monitor and control their activity, eg, how they communicate, by using light signals. Light-induced ion transporters have been used extensively for cellular activation, and now light-gated inhibitory channels have been discovered, which are a key tool for elucidating the molecular mechanisms underlying neurologic and neuropsychiatric disorders (19). Light-gated Ca2+-permeant and K+-specific ion channels have been engineered by fusing a bacterial photoactivated adenylyl cyclase to cyclic nucleotide-gated channels with high permeability for Ca2+ or for K+, respectively (01). These synthetic ion channels, when illuminated, activated or inhibited isolated rat neurons. They can be used to switch specific neurons on and off, even in live moving animals, and open possibilities for probing the function of complex neuronal systems in combination with digital technologies.
• Mobile digital devices are used in the practice of neurology to predict, diagnose, monitor, and improve the long-term management of chronic neurologic disorders.
• Machine learning and specifically designed algorithms can identify molecules in the body that are potential targets for developing new drugs.
• Digital biomarkers are being incorporated into clinical trials.
• Electronic health records including videos and imaging are useful for communication between physicians and patients for improving quality of care.
• Brain-computer interface has been used for controlled movements of a paralyzed limb.
• Mobile devices have facilitated teleneurology.
• Digital technologies are facilitating further advances in personalized medicine.
• Digital medicine has introduced new ethical, legal, and regulatory challenges for clinical researchers and institutional review boards, both of which are struggling to navigate them.
• The FDA has proposed a regulatory framework for modifications to artificial intelligence/machine learning-based software as a medical device.
Applications of digital technologies range from preclinical research in the neurosciences to clinical applications.
Use of mobile devices in the practice of neurology. In conventional neurology practice, important clinical decisions are made based on intermittent, brief, and superficial cross-sectional examinations. This gap can be filled by mobile devices. Wearable devices for monitoring various vital functions or laboratory parameters are being used to monitor patient health continuously outside the setting of a hospital or a physician’s office. Digital phenotyping is defined as the moment-by-moment quantification of social, behavioral, and cognitive biomarkers in situ using data from personal digital devices (33). Current developments and opportunities for using mobile technology to advance research and treatment of CNS disorders were discussed at a workshop (29). The participants explored innovative approaches to using devices and mobile health technology to predict, diagnose, monitor, assess compliance with, and develop treatments for, CNS disorders, including discussion of methodology, analytical techniques, and the evidence needed to validate the data for use in research and clinical practice. Clinical applications of mobile health technologies relevant to diagnosis in neurology include CNS images, EEG, detection of seizures, EMG, and monitoring of intracranial pressure.
Teleneurology. Mobile devices have facilitated teleneurology, which expanded to the remote assessment of several neurologic disorders during the COVID-19 pandemic because of restrictions on travel and personal contact. Dementia patients could not visit memory clinics, and this situation led to new strategies to guarantee adequate care. Teleneurology and digital technology devices, such as smartphones, can be very helpful in remote monitoring and care (04). Technologies such as videoconferencing or smartphone apps might be used for follow-up visits, for support to patients and caregivers, and to acquire digital markers of clinical progression. This is also familiarizing neurologists with telemedicine and digital technology approaches.
Teleneurology has to date included emergency department consults performed remotely from other areas of the hospital. Policy restrictions have been relaxed at both the hospital and government levels, with some states allowing remote practice of medicine across state lines. These changes were adopted quickly, in many cases using existing infrastructure, and could permanently alter the care and practice of neurology if they persist beyond the current pandemic (20; 48).
Studies in several specialties have shown that evaluations by telemedicine are not inferior to traditional, in-person evaluations in terms of patient and caregiver satisfaction (13). Evidence reports benefits in expediting care, reducing cost, and improving diagnostic accuracy as well as health outcomes. However, further studies are needed to validate and support the use of teleneurology.
Prevention of errors in the use of medications. The widespread growth of prescriptions and polypharmacy requires the use of information technologies to reduce avoidable medication errors, ie, to correctly identify a prescription medication and detect drug interactions. A study in the United Kingdom demonstrated prescription pill identification from mobile images in the National Institutes of Health/National Library of Medicine Pill Image Recognition Challenge dataset, recognizing the correct pill within the top 5 results with 94% accuracy (05). This is an example of seamless integration of artificial intelligence in healthcare.
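The "94% within the top 5 results" figure is an instance of top-k accuracy; the sketch below shows how such a metric is computed (the pill names and rankings are hypothetical):

```python
def top_k_accuracy(ranked_predictions, true_labels, k=5):
    """Fraction of queries whose true label appears anywhere
    in the model's top-k ranked candidates."""
    hits = sum(1 for ranked, truth in zip(ranked_predictions, true_labels)
               if truth in ranked[:k])
    return hits / len(true_labels)

# Hypothetical pill-identification results: each inner list is the
# model's candidate ranking for one photographed pill.
rankings = [
    ["pill_a", "pill_b", "pill_c", "pill_d", "pill_e"],  # truth ranked 1st
    ["pill_x", "pill_b", "pill_y", "pill_z", "pill_q"],  # truth ranked 2nd
    ["pill_m", "pill_n", "pill_o", "pill_p", "pill_r"],  # truth not in top 5
]
truths = ["pill_a", "pill_b", "pill_b"]
print(top_k_accuracy(rankings, truths))  # 2 of 3 hits
```

Top-k accuracy is a natural fit when a clinician or pharmacist reviews a short candidate list rather than relying on a single automated answer.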
Applications in pharmaceuticals relevant to neurotherapeutics. Artificial intelligence is being used for drug discovery processes. Machine learning and specifically designed algorithms can identify molecules in the body that are potential drug targets by predicting how they behave under certain conditions.
Use of digital technologies in clinical trials. Digital biomarkers are currently being incorporated into clinical trials primarily as exploratory measures, but they may eventually supplant rating scales and other more subjective measures of function. An app-based study in multiple sclerosis allows patients to log symptoms in real time. Digital traces, ie, records of online activity, are also being explored as digital biomarkers, eg, collection of data for mood analysis from social media interactions or internet searches. Apps are being used to monitor patients suffering from depression for response to their prescribed therapies by tracking their moods, which will enable the personalization of medication for each patient by identifying those who respond to specific therapies. Wearable technology is being tested in indications that involve a sensor-smartphone app combination for movement disorders such as Parkinson disease and a seizure-alerting wristband for epilepsy. An emerging trend in the clinical trial design is the incorporation of digital technologies such as mobile devices, mobile apps, remote monitoring devices, and online social engagement platforms (30). By enabling remote assessments, these technical innovations will spare patients unnecessary visits to the doctor, thereby reducing costs of clinical trials. Moreover, this technology can be used to link clinical outcomes to real-world patient behavior such as compliance, which is not possible to test in a standard clinical trial.
Digital technology also has a place in the post-marketing phase of clinical trials. An example is Abilify MyCite, a digital pill for schizophrenia, which is the first FDA-approved software-based therapy incorporating an embedded sensor. Once the pill is ingested, it releases a signal that is picked up by a MyCite patch placed on the patient’s chest, which in turn transmits a signal to a smartphone app. It meets an unmet need—the monitoring of patient compliance.
Electronic medical records. Electronic medical records are widely used in healthcare and improve efficiency as well as reduce costs. They can improve the quality and safety of patient care by reducing errors in prescriptions, and they facilitate clinical trials and the collection of data on adverse drug events.
There have been challenges in the adoption of electronic medical records among neurologists because electronic medical records do not always meet the requirements for complex physical examinations for diagnosis and follow-up, neurophysiologic testing, neuroradiologic imaging, the use of pictures and videos in movement disorders or seizures, and patient documentation of episodic complaints such as migraine. Some suggestions to improve electronic medical records in neurology practice include the following (25):
• Patient-physician electronic communications
• Use of standardized scales for diagnosis, disability assessment, and outcome
• Incorporation of data from neuroradiology, neurophysiology, and neuropathology
• Portal for videos and pictures
• Tracking of changes in severity of neurologic signs and symptoms
Applications of digital technologies in neurologic disorders. Digital technologies can be used to improve the long-term management of some chronic neurologic disorders. Some examples are given here.
Epilepsy. Technological advances have enabled patient-triggered interventions based on automated monitoring of indicators and risk factors for epilepsy. Replacing traditional follow-up appointments at arbitrary intervals with dynamic interventions, remotely and at the point and place of need, provides a better chance of a substantial reduction in seizures for patients with epilepsy. Properly implemented, electronic platforms can offer new opportunities to provide expert advice and management from first presentation, thus improving outcomes (35).
The Empatica Embrace, a smartwatch developed in the MIT Media Lab, can detect seizures using electrodermal activity as well as accelerometry and sends an alert for help. Using a seizure-detection algorithm, Embrace detected 94% of generalized tonic-clonic seizures in pediatric epilepsy patients with less than 1 false alarm per 24-hour period (37).
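A multimodal detector of this kind fuses channels before alerting; the sketch below is a deliberately crude stand-in, not Embrace's proprietary algorithm, flagging windows where electrodermal activity and movement intensity are elevated together:

```python
def detect_seizure_windows(eda, accel_mag, eda_thresh=2.0, acc_thresh=1.5):
    """Flag time windows where electrodermal activity (microsiemens)
    and accelerometer magnitude (g) are simultaneously elevated,
    a crude proxy for the multimodal detection described above.
    Thresholds are illustrative."""
    return [i for i, (e, a) in enumerate(zip(eda, accel_mag))
            if e > eda_thresh and a > acc_thresh]

# Hypothetical 10-window recording: windows 4-6 show the combined
# sympathetic surge and rhythmic movement of a convulsive seizure.
eda = [0.5, 0.6, 0.5, 0.7, 3.1, 3.4, 2.8, 0.9, 0.6, 0.5]
acc = [0.2, 0.3, 0.2, 0.4, 2.1, 2.5, 1.9, 0.5, 0.3, 0.2]
print(detect_seizure_windows(eda, acc))  # → [4, 5, 6]
```

Requiring both channels to agree is what keeps the false-alarm rate low: vigorous exercise raises movement but not the electrodermal surge, and emotional stress does the reverse.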
Alzheimer disease. By using fluorine 18 fluorodeoxyglucose PET of the brain, a deep learning algorithm developed for early prediction of Alzheimer disease achieved 82% specificity at 100% sensitivity, an average of 75.8 months prior to the final diagnosis (07).
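Sensitivity and specificity figures such as these come directly from a confusion matrix; a minimal helper illustrates the computation on hypothetical screening results:

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP).
    Labels: 1 = disease present, 0 = disease absent."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical screening results for 10 scans: one false positive
truth = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
pred  = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
sens, spec = sensitivity_specificity(truth, pred)
print(sens, spec)  # 1.0 and about 0.83
```

The "82% specificity at 100% sensitivity" pattern reflects a deliberate operating-point choice: the decision threshold is set so no true cases are missed, at the price of some false positives.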
Computing technologies and strategically placed sensors have enabled continuous in-home monitoring of Alzheimer disease patients to accurately track activities such as gait and mobility, behavior, social engagement, sleep, and medication adherence over several years. Patterns of intra-individual variation in these areas are used to predict outcomes such as cognitive decline earlier than current tools or periodic physician visits can detect them, with the potential to improve the quality of patient care. Widely available digital data on dementia research from initiatives such as the Global Alzheimer’s Association Interactive Network present both a challenge and an opportunity: to improve early detection of cognitive decline, to model the course of the condition, and to help identify individuals who may be most suitable for clinical trials. A proposed model that incorporates features of an independent dementia data trust might provide an approach to facilitate effective and ethical use of such data (27).
Although there are no curative therapies for Alzheimer disease, digital technologies may enable investigators to gain insight from small signals in clinical studies with drug candidates (11). If these tools are shown to be more sensitive for changes in the disease at an earlier stage, it may be possible to design smaller studies for testing multiple mechanisms to determine which drug should be developed further.
Parkinson disease. Gait analysis is used to assess Parkinson disease patients and generally requires video-based motion capture in a specialized laboratory. A wearable inertial measurement unit sensor affixed to the shoes yields diagnoses comparable to gold-standard video-based assessments in distinguishing the gait of Parkinson disease patients from that of unaffected controls, and it enables longitudinal tracking of gait changes in Parkinson disease as well as detection of falls (10). A study of the wearable HealthPatch found that a tri-axial accelerometer worn at any location on a participant's body, and in any orientation, could detect falls with a sensitivity of 99% and a specificity of 100%. An artificial intelligence-enhanced smartphone app for continuous monitoring of disease progression in Parkinson disease provides exponentially more data points than a neurologist’s evaluation during a clinic visit.
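An accelerometer-based fall detector typically looks for an impact spike followed by stillness; the sketch below is a toy illustration under that assumption (thresholds and data are illustrative, not taken from the HealthPatch study):

```python
import math

def detect_fall(ax, ay, az, impact_g=2.5, still_g=0.15, still_window=5):
    """Toy fall detector: a fall is an acceleration-magnitude spike
    (the impact) followed by a period of near-stillness (lying down).
    Inputs are per-axis accelerations in g with gravity removed."""
    mag = [math.sqrt(x * x + y * y + z * z)
           for x, y, z in zip(ax, ay, az)]
    for i, m in enumerate(mag):
        if m > impact_g:
            after = mag[i + 1:i + 1 + still_window]
            if len(after) == still_window and max(after) < still_g:
                return i  # index of the detected impact
    return None

# Hypothetical trace: quiet standing, a hard impact at sample 3,
# then the wearer lies still.
ax = [0.0, 0.1, 0.0, 2.8, 0.05, 0.04, 0.03, 0.05, 0.02]
ay = [0.1, 0.0, 0.1, 1.0, 0.02, 0.03, 0.02, 0.01, 0.03]
az = [0.0, 0.1, 0.0, 0.5, 0.01, 0.02, 0.04, 0.02, 0.01]
print(detect_fall(ax, ay, az))  # → 3
```

Computing the vector magnitude is what makes the detector orientation-independent, consistent with the study's finding that the sensor could be worn at any location and in any orientation.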
Multiple sclerosis. Machine learning algorithms have proven effective in differentiating multiple sclerosis from other idiopathic inflammatory demyelinating diseases. For example, a machine learning algorithm could distinguish multiple sclerosis from neuromyelitis optica spectrum disorder in about 80% of cases. In the future, such classifiers may be able to predict the conversion of clinically isolated syndrome, or radiologically isolated syndrome, to active multiple sclerosis and its progression to secondary progressive multiple sclerosis.
The SurroGait Rx shoe insole and vest system is used to measure and correct gait in multiple sclerosis patients. As a sensory substitution system, it senses and converts pressure produced on the insole during walking to the vest with a built-in vibrotactile array to re-teach multiple sclerosis patients to walk. Results of a training study on multiple sclerosis patients showed significant improvements in the 29-point Multiple Sclerosis Impact Scale (MSIS-29), no effect on gait variables, and longer postural sway trajectories with the device on versus off (47). Future studies are warranted using individuals with more advanced multiple sclerosis for a better understanding of the potential benefits of SurroGait Rx.
Adherence to medication is important in multiple sclerosis, but 25% to 50% of patients are not properly compliant with disease-modifying therapies. Because an electronic autoinjector is widely used among patients treated with interferon beta-1b in Germany, a pilot observational study was conducted there to establish a digital study process that enables the collection of medication usage data through a medical app documenting injection data, which can be transferred automatically from the autoinjector or entered manually (22). In addition, patients can use the app to document their wellbeing using the “wellness tracker” feature.
Digital technologies have improved assessment of the long-term relative benefits and safety of disease-modifying treatments for multiple sclerosis patients (24). Wearable sensors provide continuous, long-term measures of performance dynamics. Machine learning and artificial intelligence are accelerating understanding of the disease, clinical decision making, and self-management by patients.
Stroke. Predicting infarct size and location is important for decision-making and prognosis in patients with acute stroke. A deep learning model successfully predicted infarct lesions from baseline imaging without reperfusion information and achieved performance comparable to existing clinical methods (51). Predicting the subacute infarct lesion may help clinicians prepare for decompression treatment and aid in patient selection for neuroprotective clinical trials.
Sleep disorders. Continuous monitoring afforded by wearable sensors enables early diagnosis and improved patient monitoring in sleep disorders (08). Polysomnography, the current gold standard for sleep monitoring, often conflicts with subjective self-reports of sleep and is difficult to monitor longitudinally. In comparison with polysomnography, the Oura ring has a 96% sensitivity to detect sleep, and an agreement of 65%, 51%, and 61% in detecting “light sleep,” “deep sleep,” and REM sleep, respectively; “deep sleep” detected with the Oura ring is negatively correlated with advancing age (06). Wearables can detect obstructive sleep apnea by use of personalized machine-learning algorithms based on similarity to other obstructive sleep apnea patients, with detection accuracies between 90% and 93.6% (50). To explore the use of wearable devices to detect drowsiness, a pilot study used a single-channel electroencephalogram and inertial measurement unit sensor to detect 5 different levels of drowsiness, with an average of 95.2% accuracy (17).
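Per-stage agreement between a wearable and polysomnography is scored epoch by epoch; the sketch below shows how such percentages might be computed from two hypothetical hypnograms:

```python
def stage_agreement(psg, wearable):
    """Per-stage percent agreement between two epoch-by-epoch
    hypnograms: of the epochs the PSG scores as a given stage,
    what fraction does the wearable score identically?"""
    out = {}
    for stage in set(psg):
        idx = [i for i, s in enumerate(psg) if s == stage]
        agree = sum(1 for i in idx if wearable[i] == stage)
        out[stage] = agree / len(idx)
    return out

# Hypothetical 10-epoch night (W=wake, L=light, D=deep, R=REM),
# scored by polysomnography and by a wearable ring.
psg      = ["W", "L", "L", "D", "D", "R", "R", "L", "W", "D"]
wearable = ["W", "L", "D", "D", "L", "R", "L", "L", "W", "D"]
print(stage_agreement(psg, wearable))
```

Treating the polysomnogram as the reference in the denominator is what makes the reported per-stage percentages (65%, 51%, 61%) comparable across devices.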
Application of artificial intelligence in fundoscopy. Digital fundus photography combined with an artificial-intelligence algorithm using deep-learning neural networks has been shown to detect papilledema with a sensitivity of 96.4% and a specificity of 84.7% and could discriminate it from normal optic discs or discs with other abnormalities (26).
Results of a pilot study show 100% sensitivity and specificity of an offline artificial intelligence system (a smartphone-based fundus camera) for detection of referable diabetic retinopathy in diabetic patients from outpatient clinics in remote areas where ophthalmologists are unavailable (28). Costs of the images, taken by a health worker without using a mydriatic, were low.
Neurorehabilitation. A functional near-infrared spectroscopy–based brain-computer interface can be used for control of prosthetic legs and rehabilitation of paraplegic patients to enable locomotion (18). Commands can initiate and stop the gait cycle of the prosthetic leg, and the knee as well as hip torques can be controlled using the proportional derivative computed torque controller to minimize the position error. This approach can be effectively used for neurofeedback training and rehabilitation of lower limb amputees and paralyzed patients.
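The proportional-derivative control mentioned above computes joint torque from the position error and its rate of change; the sketch below uses illustrative gains and a unit-inertia joint, not the cited controller:

```python
def pd_torque(theta_des, theta, theta_dot_des, theta_dot,
              kp=25.0, kd=5.0):
    """Proportional-derivative control law for one joint:
    torque = Kp * position_error + Kd * velocity_error.
    Gains are illustrative, not from the cited study."""
    return kp * (theta_des - theta) + kd * (theta_dot_des - theta_dot)

def simulate_knee(theta0, target, steps=200, dt=0.01, inertia=1.0):
    """Integrate a unit-inertia joint under PD control toward a
    target angle (radians); a toy stand-in for the knee torque loop."""
    theta, theta_dot = theta0, 0.0
    for _ in range(steps):
        tau = pd_torque(target, theta, 0.0, theta_dot)
        theta_dot += dt * tau / inertia  # angular acceleration step
        theta += dt * theta_dot          # position step
    return theta

final = simulate_knee(theta0=0.0, target=1.0)
print(round(final, 3))  # settles near the 1.0 rad target
```

The proportional term drives the joint toward the commanded angle while the derivative term damps the motion, which is how such a controller minimizes position error without oscillating indefinitely.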
Brain-computer interface is being used for initiating controlled movements of a paralyzed limb by a patient. A minimally invasive, fully implanted, wireless, ambulatory motor neuroprosthesis using an endovascular stent-electrode array has been used to transmit electrocorticography signals from the motor cortex for control of digital devices on command in 2 participants with flaccid upper limb paralysis (34).
Neurosurgical procedures. Robots are used for mechanical assistance during surgery to simulate virtual procedures, scale the surgeon’s hand motions, and optimize localization of the lesion. These contribute to precision and enable automation, which combines with the advantages of human thinking and planning (43). Robotic arms filter out hand tremor, optimize speed of movement, and enable control of force as well as direction and range of movement (32). Robots are particularly useful for remote control surgery, cranial base operations, stereotactic surgery, radiosurgery, and microsurgery. Artificial intelligence, combined with surgical robotics and other surgical adjuncts such as image guidance, enables more accurate interventions with fewer human errors (36).
A method that combines stimulated Raman histology (a label-free optical imaging method) and deep convolutional neural networks can predict intraoperative brain tumor diagnosis in less than 3 minutes, which is much faster than the 20 to 30 minutes required for conventional techniques. A prospective clinical trial of 278 patients showed that neural network-based diagnosis of stimulated Raman images was noninferior to pathologist-based interpretation of conventional histologic images, with overall accuracy of 94.6% versus 93.9% (14).
Prediction of neurosurgical outcome. Machine learning enables computers to make accurate predictions about outcome from previous data. A systematic review has shown that machine learning models predicted outcome in patients undergoing surgery for epilepsy, brain tumor, spinal lesions, neurovascular disease, movement disorders, traumatic brain injury, and hydrocephalus with a median accuracy of 94.5% (41). Some studies demonstrated better performance of machine learning models compared with established prognostic indices and clinical experts. In a systematic review of studies that used machine learning algorithms for diagnosis, presurgical planning, or outcome prediction in neurosurgical patients, clinicians assisted by machine learning models performed better than clinicians alone (40). Thus, a human-and-machine model, rather than a human-versus-machine model, has better potential to augment the decision-making capacity of clinicians in neurosurgical applications. A study of pathology- and surgery-related variables (histology, anatomical localization, surgical access) of brain tumor patients using machine learning algorithms enabled better prediction of early (within 24 hours) postoperative complications than conventional statistical methods (46).
Applications of artificial intelligence in psychiatric disorders. Although psychiatry involves uniquely human interactions and emotional intelligence that computers cannot simulate, artificial intelligence can analyze data and detect patterns as well as warning signs that are too subtle to be noticed by humans. Analysis of data with artificial intelligence may enable quicker and more accurate diagnosis to guide treatment, but mental health diagnostics have not yet been quantified well enough to program an algorithm (03). However, artificial intelligence could allow psychiatrists to monitor their patients remotely, alerting them to emotional crises and to patients with suicidal intentions. Machine learning can support the integration of biological, psychological, and social factors for predicting, diagnosing, and classifying mild and major neurocognitive impairments, but it has limitations (12).
Application of wireless biosensors in pediatric neurology. Continuous monitoring of cerebral hemodynamics is important in children admitted to neurointensive care units. Conventional hard-wired devices connected to a base station are cumbersome and risky, particularly in a restless child. A soft, flexible, miniaturized wireless system has been devised for real-time, continuous monitoring of systemic as well as cerebral hemodynamics and has been validated for practical use in operating hospital environments (39). This system features a multiphotodiode array and a pair of light-emitting diodes with the ability to measure cerebral oxygenation, heart rate, peripheral oxygenation, and potentially cerebral pulse pressure and vascular tone by using multiwavelength reflectance-mode photoplethysmography and functional near-infrared spectroscopy.
Digital technologies and personalized neurology. Personalized neurology is the prescription of specific therapeutics best suited for an individual taking into consideration both genetic and environmental factors that influence response to therapy (16). The aim is to improve the efficacy and reduce the adverse effects of various therapies. Several biotechnologies are being integrated to develop personalized medicine. Biomarkers and integration of diagnostics with therapeutics are important for the selection and monitoring of treatments. Digital technologies are facilitating further advances in personalized medicine.
Electronic medical records are important for improving healthcare and for widening the scope of personalized medicine because they can be shared online by different physicians and hospitals. Universal adoption of electronic medical records, with genomic information included, would facilitate the development of personalized medicine. If the patient's entire genome were part of his or her medical record, the complexities of acquiring a DNA sample, shipping it, and performing laboratory work would be replaced by a quick electronic query.
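The "quick electronic query" can be sketched minimally as follows; the record schema, variant names, and function are hypothetical and stand in for real standards such as HL7 FHIR genomics resources:

```python
# Illustrative sketch: querying genomic data already stored in an
# electronic medical record. Schema and variant names are hypothetical.
patient_record = {
    "id": "P-001",
    "genome_variants": {
        "CYP2C19*2": "heterozygous",  # relevant to clopidogrel metabolism
        "APOE-e4": "absent",
    },
}

def query_variant(record, variant):
    """Return the patient's status for a variant, with no new lab work."""
    return record["genome_variants"].get(variant, "not tested")

print(query_variant(patient_record, "CYP2C19*2"))  # heterozygous
print(query_variant(patient_record, "BRCA1"))      # not tested
```

The point of the sketch is that once the genome is in the record, a prescribing decision becomes a data lookup rather than a laboratory workflow.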
Artificial intelligence can be used to predict effective interventions for an individual patient using a combination of personal historical data and population-level data to optimize treatment type and timing. The Feel wristband enables personalized interventions for wearers according to their mood states based on a neural network model, which can accurately detect 70% to 75% of fluctuations in mood states using heart rate variability and skin conductance (23). A platform for self-regulation of emotions for individuals with autism spectrum disorder uses physiological and movement measurements from a smart watch to analyze patterns of outbursts and sends instructions for real-time self-regulation exercises as a timely intervention. In one study, smart watch alerts successfully engaged participants with autism spectrum disorder in the self-regulation exercises and de-escalated most mild stress episodes and temper tantrums (45).
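The Feel wristband's actual model is proprietary, but the general approach of combining heart rate variability and skin conductance into a stress estimate can be sketched as below. The feature choice (RMSSD), weights, and threshold are hypothetical:

```python
# Illustrative sketch of mood-state detection from wearable signals.
# Weights and features are hypothetical, not the device's actual model.
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive RR-interval differences (an HRV index)."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def stress_score(rr_intervals_ms, skin_conductance_uS):
    """Logistic score: low HRV and high skin conductance suggest stress."""
    x = -0.05 * rmssd(rr_intervals_ms) + 0.8 * skin_conductance_uS
    return 1.0 / (1.0 + math.exp(-x))

calm = stress_score([850, 910, 820, 905, 835], skin_conductance_uS=2.0)
stressed = stress_score([700, 705, 698, 702, 701], skin_conductance_uS=8.0)
print(calm < stressed)  # True: the low-HRV, high-conductance wearer scores higher
```

A deployed system would learn such weights from labeled episodes per wearer and trigger an intervention (eg, a breathing exercise prompt) when the score crosses a calibrated threshold.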
Ethical and legal issues in digital neurology. Increasing use of individuals' data traces in novel ways for both research and clinical care challenges the norms of human subjects research ethics and existing privacy laws, particularly with the advent of new digital technologies (National Academies of Sciences, Engineering, and Medicine 2000). Digital medicine has introduced new ethical, legal, and regulatory challenges for clinical researchers and institutional review boards, both of which are struggling to navigate them. The Connected and Open Research Ethics (CORE) initiative was launched to address these issues and employs a participatory research approach whereby researchers and institutional review board affiliates are involved in identifying the priorities and functionality of a shared resource (02). The main goal of CORE is to develop dynamic and relevant ethical practices to guide research in digital medicine. Security and confidentiality of patients' digital health data is one of the concerns.
Use of robots in surgical procedures raises the question of liability in the event of a mishap. Because a robot cannot be charged with negligence, would the surgeons or hospitals have to take responsibility for its technical failures?
Digital neurology still has limitations, including security vulnerabilities, lack of transparency, and unresolved ethical issues. Considerable advances in accuracy and security, as well as resolution of the ethical issues, are expected in the future.
Regulation of digital technology. The FDA has proposed a regulatory framework for modifications to artificial intelligence/machine learning-based software as a medical device (www.fda.gov). Several artificial intelligence- and machine learning-based software devices have been approved or cleared by the FDA (15). Devices relevant to neurology include:
• Icobrain automatically labels, visualizes, and volumetrically quantifies segmental brain structures from a set of MRIs.
• IDx-DR detects severe diabetic retinopathy in adults diagnosed with diabetes who have not been previously diagnosed with diabetic retinopathy.
• Viz CTP performs image processing, analysis, and display of brain CT perfusion scans.
• NeuralBot is an adjunct ultrasound device for measuring and displaying cerebral blood flow velocity and the detection of transient emboli within the bloodstream.
• BriefCase is a radiological computer-aided triage and notification software for the analysis of nonenhanced head CT images and assists in workflow triage by flagging suspected positive pathological findings in CT images of the head.
• Accipiolx aids in prioritizing clinical assessment of noncontrast head CT of adults in the acute care setting with features suggesting acute intracranial hemorrhage.
The future of digital neurology. From initial applications in biosensing and analysis of clinical data, digital technologies are increasingly being used therapeutically and will facilitate the integration of diagnosis with therapeutics. Most diagnostic applications of artificial intelligence relate to image recognition, but disease prediction may be problematic because such a system may not distinguish between diagnosis and prediction. However, these systems have an improved ability to learn the underlying complex nonlinear relationships between independent variables and their dependent outcomes. Machine-learning research has already demonstrated the ability to predict surgical outcomes, including morbidities, length of stay, and likelihood of survival. Prediction of future symptoms is feasible.
Digital technologies will have a significant impact on the methods of clinical trials for neurologic disorders in the future. Further advances in the application of digital technologies in neurology will require close cooperation between neurologists, engineers, manufacturers of digital devices, and patients. Machine learning in neurology will need neurologists trained in statistics and computer science who can contribute meaningfully to algorithm development and evaluation. Integration of digital technologies and medicine is now feasible, and 21st century neurologists can have the tools they need to process data, make decisions, and master the complexity of 21st century patients.
K K Jain MD†
Dr. Jain was a consultant in neurology and had no relevant financial relationships to disclose.