Research Lab Technician III/Supervisor
Room: CSC 133
Liew, S.-L., Anglin, J. M., Banks, N. W., Sondag, M., Ito, K. L., Kim, H., Chan, J., Ito, J., Jung, C., Khoshab, N., Lefebvre, S., Nakamura, W., Saldana, D., Schmiesing, A., Tran, C., Vo, D., Ard, T., Heydari, P., Kim, B., Aziz-Zadeh, L., Cramer, S. C., Liu, J., Soekadar, S., Nordvik, J.-E., Westlye, L. T., Wang, J., Winstein, C., Yu, C., Ai, L., Koo, B., Craddock, R. C., Milham, M., Lakich, M., Pienta, A., & Stroud, A. (2018). A large, open source dataset of stroke anatomical brain images and manual lesion segmentations. Scientific Data, 5, 180011. https://doi.org/10.1038/sdata.2018.11
Stroke is the leading cause of adult disability worldwide, with up to two-thirds of individuals experiencing long-term disabilities. Large-scale neuroimaging studies have shown promise in identifying robust biomarkers (e.g., measures of brain structure) of long-term stroke recovery following rehabilitation. However, analyzing large rehabilitation-related datasets is problematic due to barriers in accurate stroke lesion segmentation. Manually-traced lesions are currently the gold standard for lesion segmentation on T1-weighted MRIs, but are labor intensive and require anatomical expertise. While algorithms have been developed to automate this process, the results often lack accuracy. Newer algorithms that employ machine-learning techniques are promising, yet these require large training datasets to optimize performance. Here we present ATLAS (Anatomical Tracings of Lesions After Stroke), an open-source dataset of 304 T1-weighted MRIs with manually segmented lesions and metadata. This large, diverse dataset can be used to train and test lesion segmentation algorithms and provides a standardized dataset for comparing the performance of different segmentation methods. We hope ATLAS release 1.1 will be a useful resource to assess and improve the accuracy of current lesion segmentation methods.
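The abstract above notes that ATLAS provides a standardized dataset for comparing segmentation methods. A common overlap measure for scoring a predicted lesion mask against a manual tracing is the Dice similarity coefficient; the following is an illustrative sketch (not code from the paper), using NumPy and toy binary masks:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice overlap between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 2D example: two 4-voxel "lesions" sharing one voxel
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True
b = np.zeros((4, 4), dtype=bool); b[2:4, 2:4] = True
print(dice_coefficient(a, b))  # 2*1 / (4+4) = 0.25
```

In practice the masks would be 3D volumes loaded from the dataset's NIfTI files, but the computation is identical.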
Anglin, J. M., Sugiyama, T., & Liew, S.-L. (2017). Visuomotor adaptation in head-mounted virtual reality versus conventional training. Scientific Reports, 7, 45469. https://doi.org/10.1038/srep45469
Immersive, head-mounted virtual reality (HMD-VR) provides a unique opportunity to understand how changes in sensory environments affect motor learning. However, potential differences in mechanisms of motor learning and adaptation in HMD-VR versus a conventional training (CT) environment have not been extensively explored. Here, we investigated whether adaptation on a visuomotor rotation task in HMD-VR yields adaptation effects similar to those in CT and whether these effects are achieved through similar mechanisms. Specifically, recent work has shown that visuomotor adaptation may occur via both an implicit, error-based internal model and a more cognitive, explicit strategic component. We sought to measure both overall adaptation and the balance between implicit and explicit mechanisms in HMD-VR versus CT. Twenty-four healthy individuals were placed in either HMD-VR or CT and trained on an identical visuomotor adaptation task that measured both implicit and explicit components. Our results showed that the overall time course of adaptation was similar in both HMD-VR and CT. However, HMD-VR participants utilized a greater cognitive strategy than CT participants, while CT participants engaged in greater implicit learning. These results suggest that while both conditions produce similar overall adaptation, the mechanisms by which visuomotor adaptation occurs in HMD-VR appear to be more reliant on cognitive strategies.
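For readers unfamiliar with the implicit/explicit decomposition mentioned above: in aim-report visuomotor-rotation paradigms, the explicit component is the participant's reported aiming direction, and the implicit component is the remainder of the hand angle after subtracting that report. A minimal sketch with hypothetical per-trial values (not data from the study):

```python
import numpy as np

# Hypothetical per-trial angles (degrees) during adaptation to a rotation:
# hand_angle  = hand movement direction relative to the target (total adaptation)
# reported_aim = where the participant said they were aiming (explicit strategy)
hand_angle = np.array([5.0, 12.0, 20.0, 28.0])
reported_aim = np.array([0.0, 5.0, 10.0, 12.0])

explicit = reported_aim                 # cognitive, strategic component
implicit = hand_angle - reported_aim    # error-based internal-model component
print(implicit)  # [ 5.  7. 10. 16.]
```

Under this decomposition, the HMD-VR group in the study above would show relatively larger explicit values, and the CT group relatively larger implicit values.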