Our results suggest coherence of temporal encoding across timescales

    2018-10-25

    Our results suggest coherence of temporal encoding across timescales. In particular, we demonstrate that rapid intertrial neural stability for encoding frequency information from 100 to 400 Hz (for speech stimuli presented at 4.35 Hz) relates to the consistency of beat synchronization (at rates approximating speech syllables, 1.67 and 2.5 Hz; see Table 3). This connection between millisecond-level timing in the auditory midbrain and the coordination of motor movements at much slower rates may reflect hierarchical temporal scaffolding, with extremely fast, high-fidelity neural encoding (i.e., intertrial stability of the FFR for dynamic formant transitions and periodic vowels) acting as temporal subdivisions that support sensorimotor synchrony (i.e., beat synchronization consistency) at slower rates. This finding is consistent with previous work demonstrating concomitance between beat synchronization and the precision of low-frequency temporal encoding: correlations were observed between beat synchronization consistency and subcortical envelope tracking, but not broadband stimulus encoding (Woodruff Carr et al., 2014). Together, these results suggest that the ability to tune in to and exploit slow modulations of spectral information emerges first developmentally, supporting more stable trial-by-trial neural encoding.

    The FFR is generated by the summed activity of synchronously firing neurons throughout the subcortical auditory nuclei; intertrial variability of an FFR may therefore arise from a number of circumstances: a failure of eighth-nerve fibers to synchronize (e.g., auditory neuropathy), greater receptor adaptation or fatigue, and/or slower recovery from firing (i.e., prolonged refractory periods) (Don et al., 1977; Starr et al., 2003; Schaette et al., 2005). It is difficult to pinpoint the cause of this jitter; future work using intracranial recordings is necessary to determine its source.
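As an illustration of the two measures related above, the sketch below computes (a) intertrial stability as the mean correlation between band-limited averages of random half-splits of single-trial responses, and (b) beat synchronization consistency as the resultant vector length of tap phases relative to an isochronous pacing rate. The function names, the half-split scheme, and all parameter values are illustrative assumptions, not the analysis pipeline of the cited studies.

```python
import numpy as np

def intertrial_stability(trials, fs, fmin=100.0, fmax=400.0,
                         n_splits=50, seed=0):
    """Mean correlation between band-limited (fmin-fmax Hz) averages of
    random half-splits of single-trial responses (n_trials x n_samples).

    Values near 1 indicate highly repeatable responses; values near 0
    indicate trial-to-trial variability (neural "jitter").
    """
    rng = np.random.default_rng(seed)
    n_trials, n_samples = trials.shape
    # Zero out spectral content outside the band of interest.
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    spectra = np.fft.rfft(trials, axis=1)
    spectra[:, (freqs < fmin) | (freqs > fmax)] = 0.0
    filtered = np.fft.irfft(spectra, n=n_samples, axis=1)
    half = n_trials // 2
    rs = []
    for _ in range(n_splits):
        order = rng.permutation(n_trials)
        a = filtered[order[:half]].mean(axis=0)
        b = filtered[order[half:2 * half]].mean(axis=0)
        rs.append(np.corrcoef(a, b)[0, 1])
    return float(np.mean(rs))

def beat_consistency(tap_times, rate_hz):
    """Resultant vector length of tap phases relative to an isochronous
    beat at rate_hz: 0 = random tapping, 1 = perfectly consistent."""
    period = 1.0 / rate_hz
    phases = 2 * np.pi * (np.asarray(tap_times) % period) / period
    return float(np.abs(np.exp(1j * phases).mean()))
```

In this toy framing, the hypothesis in the text corresponds to a positive correlation, across individuals, between `intertrial_stability` scores (100-400 Hz) and `beat_consistency` scores at syllable-like rates (e.g., 1.67 or 2.5 Hz).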
If reliable animal models for beat synchronization are discovered (Cook et al., 2013; Hasegawa et al., 2011; Hattori et al., 2013; Large and Gray, 2015; Patel et al., 2009), it may become possible to probe local temporal jitter within the inferior colliculus, a primary generator of the FFR, and to relate it to beat synchronization abilities. The ability to consistently perceive and anticipate time intervals in sound streams may explain previously observed links between auditory-motor synchronization and phonological processing: if input to the auditory system is not coherent from one experience to the next, this could hinder the development of a refined phonemic inventory. Increased neural variability would make learning the probabilities and statistics of acoustic events challenging, and individuals with poor neural stability could exhibit difficulties in predicting their environment. In the case of autism, individuals with greater neural noise also exhibit heightened sensitivity to details at the expense of an impaired ability to integrate those details into gestalt percepts (Dinstein et al., 2015); this may bias children with autism to focus on details rather than attempt to integrate them. Neural instability might likewise underlie some of the deficits exhibited by children with language difficulties who struggle to process timing information in speech, through the process of stochastic resonance (McDonnell and Abbott, 2009), a phenomenon in which a signal normally too weak to be detected is boosted by noise. This could also explain a pattern observed in the auditory domain for children with dyslexia: children with auditory-based learning disorders exhibit an allophonic mode of speech perception, demonstrating heightened sensitivity to irrelevant phonemic distinctions (Serniclaes et al., 2004).
Supporting this idea, greater variability in auditory-neurophysiological responses elicited by speech has been reported in poor readers (Hornickel and Kraus, 2013; White-Schwoch et al., 2015) and in animal models of dyslexia (Centanni et al., 2013).
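The stochastic resonance phenomenon mentioned above can be illustrated with a toy hard-threshold detector: a sine wave whose peak lies below the detection threshold produces no output without noise, is partially recovered at moderate noise levels (crossings cluster near signal peaks), and is swamped again when noise dominates. The detector, signal parameters, and scoring function below are illustrative assumptions, not a model from the cited work.

```python
import numpy as np

def detection_score(noise_sd, threshold=1.0, amp=0.6, n=20000, seed=0):
    """Correlation between a subthreshold sine (peak amp < threshold) and
    the binary output of a hard-threshold detector, for a given noise level.
    """
    rng = np.random.default_rng(seed)
    t = np.arange(n)
    signal = amp * np.sin(2 * np.pi * t / 100.0)  # peak 0.6 < threshold 1.0
    noisy = signal + rng.normal(0.0, noise_sd, n)
    out = (noisy > threshold).astype(float)  # detector fires on crossings
    if out.std() == 0:
        return 0.0  # no crossings at all: nothing detected
    return float(np.corrcoef(signal, out)[0, 1])
```

Sweeping `noise_sd` from zero upward traces the characteristic inverted-U of stochastic resonance: zero detection without noise, a peak at intermediate noise, and decay at high noise.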