AOA2021: ACADEMY OF APHASIA 2021
PROGRAM FOR MONDAY, OCTOBER 25TH

09:00-10:00 Session 8: Platform session

Neural substrates of language I

09:00
Grammatical parallelism in aphasia revisited: a common lesion substrate for syntactic production and comprehension deficits in the posterior temporal lobe
PRESENTER: William Matchin

ABSTRACT. Introduction

A grammatical parallelism hypothesis in aphasia is commonly espoused: that agrammatism and syntactic comprehension deficits coincide, resulting from common injury to Broca’s area and/or surrounding cortex (Caramazza & Zurif, 1976; Friederici, 2017; Thompson et al., 1997). However, Matchin & Hickok (2020) advocate an alternative hypothesis: that syntactic comprehension deficits coincide with paragrammatism, characterized by the use of complex constructions and functional elements, with syntactic errors rather than overall simplification or reduction (Goodglass, 1993; Kleist, 1914), resulting from common injury to the posterior temporal lobe. Here we test both parallelism hypotheses.

 

Methods

220 people with chronic post-stroke aphasia were assessed with the Western Aphasia Battery-Revised (Kertesz, 2007). Subjects’ lesions were manually drawn on their MRI scans and warped to MNI space (Fridriksson et al., 2018). To assess syntactic comprehension, we combined the Sequential Commands subtest with the Auditory Word Recognition subtest as a covariate. Sequential Commands requires subjects to perform increasingly complex sequences of simple actions (e.g., point with the pen to the book), most of which require syntactic parsing to perform correctly. Auditory Word Recognition requires subjects to point to actual or drawn objects, pieces of furniture, shapes, letters, numbers, colors, and body parts (e.g., point to the cup). A subset of 53 subjects had previously been assessed for grammatical production deficits using consensus perceptual ratings by four expert raters of elicited speech samples (Cinderella story protocol from AphasiaBank; MacWhinney et al., 2011) (Matchin et al., 2020). Agrammatism and paragrammatism ratings were covaried with words per minute to control for speech rate. We first performed one-tailed non-parametric correlations between each grammatical production measure (incorporating lesion volume as a covariate) and the syntactic comprehension measure for the set of 53 subjects. For each of the 220 patients, we then calculated proportion damage within each region of interest (ROI), defined as the lesion distributions associated with agrammatism and paragrammatism (voxel-wise p < 0.01, incorporating lesion volume as a covariate), created using NiiStat (https://www.nitrc.org/projects/niistat/). Finally, we performed one-tailed non-parametric correlations between damage to each ROI and syntactic comprehension scores.
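For readers who want to follow the logic of the covariate-adjusted correlation, a minimal Python sketch is given below. This is not the authors' analysis code (lesion-symptom mapping was done in NiiStat); the file and column names are hypothetical, and lesion volume is handled here by residualizing it out of both variables before computing Kendall's tau-b.

import numpy as np
import pandas as pd
from scipy.stats import kendalltau
from sklearn.linear_model import LinearRegression

df = pd.read_csv("behavioral_scores.csv")  # hypothetical: one row per subject

def residualize(score, covar):
    # remove the linear effect of a covariate (here, lesion volume) from a score
    model = LinearRegression().fit(covar.reshape(-1, 1), score)
    return score - model.predict(covar.reshape(-1, 1))

lesion = df["lesion_volume"].to_numpy(float)
paragram = residualize(df["paragrammatism_rating"].to_numpy(float), lesion)
syn_comp = residualize(df["sequential_commands"].to_numpy(float), lesion)

tau, p_two_sided = kendalltau(paragram, syn_comp)
# directional hypothesis: more paragrammatism -> lower comprehension (negative tau)
p_one_sided = p_two_sided / 2 if tau < 0 else 1 - p_two_sided / 2
print(f"Kendall's tau-b = {tau:.3f}, one-tailed p = {p_one_sided:.4f}")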

 

Results

Agrammatism was not associated with lower syntactic comprehension (Kendall’s tau B = -0.063, p = 0.255), but paragrammatism was significantly associated with lower syntactic comprehension (Kendall’s tau B = -0.329, p = 0.0002749). Damage to the agrammatism ROI was not associated with lower syntactic comprehension (Kendall’s tau B = 0.009, p = 0.577), but damage to the paragrammatism ROI was significantly associated with lower syntactic comprehension (Kendall’s tau B = -0.134, p = 0.002). Overlap analyses (including lesion volume, voxel-wise p < 0.01) showed almost no overlap in lesion distributions between agrammatism and syntactic comprehension, whereas there was significant overlap for paragrammatism and syntactic comprehension in the posterior superior temporal gyrus and sulcus.

 

Conclusions

Our results speak against the grammatical parallelism hypothesis rooted in agrammatism and damage to the inferior frontal lobe, and in favor of the grammatical parallelism hypothesis rooted in paragrammatism and damage to the posterior temporal lobe.

09:30
Mapping the arcuate fasciculus with nTMS and action naming: the effect of transitivity
PRESENTER: Effrosyni Ntemou

ABSTRACT. Introduction

Language mapping with navigated Transcranial Magnetic Stimulation (nTMS) is a non-invasive method used to causally identify cortical areas involved in language processing (Hauck et al., 2015; Ille et al., 2016; Krieg et al., 2017; Picht et al., 2013; Tarapore et al., 2013). The combination of diffusion Magnetic Resonance Imaging (dMRI) and nTMS promises to increase language mapping accuracy by allowing researchers to stimulate cortical terminations of white matter tracts (Reisch et al., in prep). The arcuate fasciculus (AF) is an associative tract with cortical terminations in the frontal, parietal and temporal lobes (Bernard et al., 2019; Catani et al., 2005; Catani & Mesulam, 2008; de Weijer et al., 2015). Cortical regions connected by the AF have been shown to be differentially involved in the processing of transitive and unergative verbs, with transitive verb processing eliciting higher BOLD activation bilaterally (den Ouden et al., 2009; Shetreet et al., 2007; Thompson et al., 2007, 2010). These findings have led authors to suggest that bilateral parietal areas and left temporal areas are involved in argument structure information retrieval and verb/argument integration, respectively (Meltzer-Asscher et al., 2015; Thompson & Meltzer-Asscher, 2014; cf. Matchin et al., 2019).

 

Methods

In the present study, we combined dMRI and nTMS during an action naming task with finite verbs (Ohlerth et al., 2020) to investigate the neural underpinnings of transitive and unergative verbs. After performing fiber tracking of the left and right AF (Fekonja et al., 2019), we identified and stimulated frontal, parietal, and temporal cortical terminations in ~6 adult native speakers of German, according to common protocols for nTMS language mapping (Krieg et al., 2017). Based on previous findings from fMRI studies, we predicted that if verb production is influenced by the number of arguments, nTMS would induce more errors during naming of transitive compared to unergative verbs.

Results

Induced errors were quantified and analysed according to cortical terminations (frontal, temporal, parietal) and verb type (transitive/unergative). For the left AF, preliminary results suggest that nTMS induced more errors with transitive verbs compared to unergative verbs when stimulating temporal terminations. Error rates between the two verb types did not differ during the stimulation of left frontal and parietal terminations. Moreover, no significant differences between transitive and unergative verbs were found during stimulation of the cortical terminations of the right AF.

Conclusions

Preliminary data suggest that suppression of posterior temporal regions leads to increased error rates during the production of transitive verbs in a sentence context. Given the inhibitory nature of our nTMS protocol, we show that posterior temporal regions are causally involved in argument structure processing. In line with previous work (den Ouden et al., 2009; Malyutina & den Ouden, 2017; Matchin et al., 2019; Thompson et al., 2010), we suggest that during action naming posterior temporal regions are necessary for argument structure processing. The present study emphasizes the importance of including verbs with different numbers of arguments during language mapping with nTMS, especially during presurgical mapping of individuals with brain tumors.

10:00-10:15 Break
10:15-11:15 Session 9: Keynote

NIH Keynote

10:15
A neural code for speech

ABSTRACT. Speaking is a defining behavior of our species. I will discuss new discoveries on the functional organization and dynamics of neural populations in the ventral sensorimotor cortex that underlie speech articulation. We have recently mapped out the cortical representations of the human larynx, which appear to revise the classic somatotopically organized homunculus map. Related studies demonstrate how motor neural populations encode the coordinated movements of the entire vocal tract during fluent speech. Finally, I will discuss how these neurobiological findings are being translated into powerful algorithms for a speech neuroprosthesis to restore communication to paralyzed people.

11:15-11:30 Break
11:30-12:15 Session 10: Symposium

Symposium: Spotlighting Spoken Discourse in Aphasia

11:30
Spotlighting spoken discourse in aphasia (Symposium)

ABSTRACT. Introduction

Discourse analysis of aphasic speech has been used for many years because it yields a measure that approximates everyday communication. Despite this high ecological validity, the field has numerous obstacles to overcome. In this symposium, we highlight work from members of the FOQUSAphasia (FOcusing on QUality of Spoken discourse in Aphasia) working group (Stark et al., 2020). Specifically, we discuss the current lay of the land, discourse assessment, and best practices for reporting spoken discourse research in aphasia. We end with future directions and goals.

 

Methods and Results

The symposium will consist of the following presentations:

  1. The first talk will present the results of a recent survey of clinicians and researchers from across the globe, highlighting barriers and future directions for spoken discourse analysis in aphasia, and tying this analysis together with prior surveys from different countries (Cruice et al., 2020; Bryant et al., 2017).
  2. The second talk will discuss using virtual, remote platforms to obtain test-retest spoken discourse data from persons with aphasia and those without brain damage, and then will discuss psychometric properties of spoken discourse outcomes (i.e., rater and test-retest reliability) across two studies.
  3. The third talk will highlight discourse outcome measures (e.g., core lexicon, main concept analysis, and a new measure, main concept, sequencing and story grammar; Richardson et al., 2021) and their clinical utility.
  4. The final talk will wrap up the symposium by discussing best practices for systematic reporting of spoken discourse research in aphasia (results of an eDelphi study) and by highlighting next directions for the field.

 

Conclusions

The goals of this symposium are to (1) highlight the usefulness of spoken discourse analysis, whilst also pointing out barriers and future directions to address such barriers, (2) identify discourse outcomes with strong psychometric properties, as well as discourse outcomes measuring a variety of elements about the discourse, and (3) improve on the ability to draw overarching conclusions about spoken discourse analysis and its utility within aphasia assessment and treatment, by highlighting best practices that encourage replicability and reproducibility, as well as enhance the ability to conduct meta-analyses of studies in the field.

11:35
Current practices in spoken discourse analysis in aphasia
PRESENTER: Manaswita Dutta

ABSTRACT. Introduction

Spoken discourse analysis is commonly employed in the assessment and treatment of people with aphasia (Brady et al., 2016; Bryant et al., 2016). However, there is no standardization in assessment, analysis, or reporting procedures for spoken discourse, thereby precluding comparisons across studies, replication of findings, and the establishment of a set of psychometrically sound and clinically relevant common data elements. An important first step is to identify current practices in acquiring, analyzing, and reporting spoken discourse in aphasia.

 

Methods

A mixed-methods survey was completed as part of the FOQUSAphasia Best Practices Task Force (Stark et al., 2020) and publicized internationally to researchers and clinicians who are involved in spoken discourse analysis in aphasia. Data were collected between September and November 2019.

 

Results and Discussion

Of the 201 individuals who consented to participate, 94% completed all mandatory questions. Compared to prior surveys (Bryant et al., 2017; Cruice et al., 2020), the current sample (see Table 1) included both speech-language pathologists and researchers representing different geographic regions, demographics, and a broad range of backgrounds and experiences (e.g., work settings, years working in aphasia, professional degrees).

Respondents most frequently used discourse analysis to describe aphasia symptoms (72.1%; N=165). As in Bryant et al. (2017), standardized aphasia assessments were most commonly used to collect discourse samples (74.8%, N=163). Most respondents collected 1-2 samples (41.5%), with an average sample length of 1-3 minutes (24%, N=147). Around 78% of respondents recorded samples (audio or video), and of those who did not record, around 60% transcribed in real time. Approximately 70% of respondents (N=133) frequently relied on clinical judgment-based analysis, with fewer using computerized transcription systems. Respondents used a variety of raters and training procedures. In line with Bryant et al. (2017) and Cruice et al. (2020), barriers to utilizing discourse analysis across clinical and research settings were common, with time the most frequently cited barrier (see Figure 1). Nearly 94% of respondents noted a lack of and need for psychometric properties and normative data on spoken discourse outcomes. Qualitative analyses of open-ended questions confirmed and expounded on these key findings. For example, in addition to time, respondents identified applying discourse protocols, norms, and psychometric properties to multilingual populations as a salient barrier.

 

Conclusions and Future Directions

This survey identified significant heterogeneity in discourse analysis procedures across clinical and research settings. An important step is the aggregation of pre-existing psychometric data into a single access portal, to overcome issues related to disparate reporting practices of critical data collection and analysis details essential for replication and reproducibility. A second critical step is the creation of and adherence to a set of best practice standards (or common data elements). A focus on psychometric properties, and indeed on best practices, will overcome some of the challenges inherent to implementation science. Third, time-efficient methods such as automated discourse analysis, which increase accuracy and replicability and rely less on training and expertise, must be explored further. Finally, there is a need to focus on establishing and validating discourse analysis procedures for multicultural and multilingual populations.

11:55
Determining rater and test-retest reliability of discourse measures in the spoken personal narratives of people with aphasia
PRESENTER: Carla Magdalani

ABSTRACT. Introduction

Discourse interventions are an emerging evidence base in aphasiology [1]. However, reliably eliciting and analysing discourse is challenging for various reasons [2], and reliability is often under-reported, inadequately described, and calculated non-statistically. Notwithstanding these challenges, a stable baseline and high inter-rater reliability must be established before treatment commences to enable judgments on the effectiveness of that treatment, and to minimise errors in interpretation. In this talk, we will present evidence from two studies, which separately evaluated rater and test-retest reliability of discourse metrics in aphasia.

Methods

NEURAL Research Lab (NRL) study [3]: a study conducted virtually in 2020, recruiting 25 persons with chronic aphasia (3 excluded for significant missing data) and 24 prospectively matched adults without brain damage (1 excluded for significantly poorer performance). Each participant took part in a test and a retest session, 10 +/- 3 days apart, during which they told five narratives [5]. After orthographic transcription, transcripts were coded for a word-level measure, % correct information units (%CIUs).

LUNA study [4]: a virtually conducted study which began in 2020 and recruited 28 participants with chronic aphasia. Participants told and retold two personal narratives, about a week apart, prior to receiving LUNA (i.e., narrative-based) treatment. After orthographic transcription, transcripts were rated using a word-level (%narrative words) and a macrostructure-level measure (story grammar).

In both studies, two raters each analysed 50% of the transcripts, and 10% (LUNA) and 20% (NRL) of transcripts were randomly selected for rater reliability. Test-retest reliability was calculated for the narratives at the two time points. Intraclass correlation coefficients (ICCs) were used for the word-level measures (excellent, >0.9; good, 0.75-0.9; moderate, 0.5-0.75; poor, <0.5) and kappa for the macrostructure-level measure (very good, >0.81; good, 0.61-0.8; moderate, 0.41-0.6; fair, 0.21-0.4; poor, <0.2).
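To make the reliability statistics concrete, the Python sketch below computes an ICC for a word-level measure and Cohen's kappa for a macrostructure-level coding. It is a toy illustration under assumed data layouts and made-up values, not either study's actual analysis script.

import pandas as pd
import pingouin as pg
from sklearn.metrics import cohen_kappa_score

# hypothetical long-format table: one row per (transcript, rater) pair
word_level = pd.DataFrame({
    "transcript": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5],
    "rater":      ["A", "B"] * 5,
    "pct_ciu":    [62.0, 60.5, 48.3, 50.1, 71.2, 70.4, 55.0, 53.8, 66.7, 68.0],
})
icc = pg.intraclass_corr(data=word_level, targets="transcript",
                         raters="rater", ratings="pct_ciu")
print(icc[["Type", "ICC", "CI95%"]])  # report the ICC form appropriate to the design

# hypothetical story-grammar codes assigned by two raters to the same utterances
rater1 = ["setting", "action", "action", "outcome", "setting"]
rater2 = ["setting", "action", "outcome", "outcome", "setting"]
print("kappa =", cohen_kappa_score(rater1, rater2))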

Results

Intra-rater reliability at the word-level was excellent for the LUNA study (not computed for NRL). Inter-rater reliability at the word-level was good-to-excellent for both studies. LUNA’s macrostructure-level measure showed good-to-very good intra-rater reliability and moderate inter-rater reliability.

For test-retest reliability, LUNA’s word-level measure showed good-to-excellent reliability, and the macrostructure-level measure showed moderate-to-good reliability. In NRL, when averaging word-level measures across all tasks, test-retest reliability was good-to-excellent for both PWA and NBD groups, though test-retest reliability ranged from poor-to-excellent when evaluated by task.

Discussion

For both studies, rater reliability was high, especially at the word level. LUNA’s macrostructure measure was analysed reliably within raters but was less reliable between raters and across timepoints. For both studies, word-level variables were found to be highly reliable in the aphasia groups. Notably, the NRL study demonstrated lower reliability for the NBD group, and reliability varied by narrative type. We will further elaborate on these results, especially their clinical implications. Both studies were conducted virtually and showed high retention; this will also be discussed in the presentation.

 

12:15-12:30 Break
12:30-13:30 Session 11: Symposium

Symposium: Spotlighting Spoken Discourse in Aphasia

12:30
Progress Towards Clinically Practicable Discourse Outcomes

ABSTRACT. Introduction

Navigating everyday conversation in stroke-induced aphasia (PWA) or primary progressive aphasia (PwPPA) is best indexed in discourse. Such complex communicative exchanges are high-priority treatment targets identified by PWA (Worrall et al., 2011) and primary outcomes (Ash & Grossman, 2015; Brady et al., 2016). Thus, removing barriers to the clinical feasibility of discourse analyses is essential to ensure real-world implementation. Standardization and normative data have reduced implementation barriers related to the qualitative nature and subjectivity of discourse measurement. However, approximately 80% of practicing SLPs report time as an enduring barrier (Bryant et al., 2016). This presentation reviews checklist-based measures of discourse samples, which reduce or eliminate the need for lengthy, specialized transcription, saving time.

Methods and Results

Existing micro- and macro-linguistic, checklist-based measures of discourse are picture- or story-specific, allowing for standardization. These measures evaluate lexical items (CoreLex), main concepts, and sequencing and story grammar elements in PWA and PwPPA.

CoreLex checklists (Dalton et al., 2020) are normed micro-linguistic elements specific to a given story. Credit is given for each lexeme on the checklist, regardless of form, but excluding synonyms. Presence of CoreLex elements is sensitive to age-related changes in healthy controls and differentiates healthy controls from PWAs, as well as aphasia subtypes and fluent vs. nonfluent aphasia (reference). CoreLex performance correlates with other discourse measures and standardized tests, suggesting it may serve as an index of overall language performance.

Main concept analysis (MCA) is a hybrid micro-/macro-linguistic measure of quantity, accuracy, and completeness of discourse (Nicholas & Brookshire, 1993, 1995). MCA checklists based on healthy control transcripts exist for several pictures/stories (e.g., Kong 2009). For MCA, utterances that match the MCs in checklists are scored for accuracy and completeness (e.g., Kong 2009, Richardson & Dalton, 2016; 2020). MCA scores differentiate between healthy controls and PWAs or PwPPA (e.g., Dalton & Richardson, 2019, 2020) and correlate with standardized assessments (Dalton & Richardson, 2019; Kong et al., 2016; Richardson et al., 2018) and functional measures (Armes et al., 2020; Cupit et al., 2010; Doyle et al., 1995; Ross & Wertz, 1999).

Main Concept, Sequencing, and Story Grammar (MSSG) is a multilevel analytic approach complementing the psychometrics and procedural knowledge of MCA with story grammar component coding and easy-to-obtain sequencing information (Greenslade et al., 2020). MSSG yields scores for MCA, sequencing, MC + sequencing, total episodic components, and episodic complexity. MSSG can generate performance profiles, similar to the Story Goodness Index (Le et al., 2011), mapping participants’ ability to tell accurate, complete, and logically sequenced stories against their production of episodic structure. For all MSSG variables, performance differs between healthy controls and PWAs, and between controls and aphasia subtypes (Richardson et al., 2021).

Conclusions

Normative data, checklists, and freely available training that can be completed at a clinician’s own pace (as available for CoreLex, MCA, and MSSG analyses) are chipping away at the barriers and improving the clinical feasibility of discourse analysis. Additionally, organizations like FOQUSAphasia are connecting researchers and clinicians, effectively reducing the time needed to translate new research into clinical practice.

12:50
Best practices for reporting of discourse analysis
PRESENTER: Lucy Bryant

ABSTRACT. Introduction: Validity and reproducibility of results in spoken discourse analysis in persons living with aphasia requires expert agreement on measures, methods, and analyses, which cover all aspects of the concept being measured. Minimum reporting standards encourage consistency and efficacy, allowing clinicians and researchers to evaluate the results across studies and reproduce these works.

 

Methods: This study was conducted by members of the FOQUSAphasia (FOcusing on QUality of Spoken discourse in Aphasia) working group (Stark et al., 2020). Experts in aphasia and discourse analysis were identified as the top 165 publishing researchers in this field through Web of Science. Experts were invited via email to contribute their expert opinion of minimum reporting standards for spoken discourse in aphasia using the e-Delphi method, an iterative, three-stage process. At each stage, experts were invited to complete a short online survey to identify expert consensus and agreement on discourse analysis key terms relating to reporting of discourse elicitation, preparation and analysis.

 

Results: In the first eDelphi round, 69 experts responded, providing opinions on the inclusion of 45 reporting criteria relating to key terms, measures, methods, and analyses needed to interpret study results, ensure reproducibility of study findings, and evaluate the methodological rigor of discourse analysis studies. The baseline level of agreement was reached for 35 reporting criteria, which were taken to Round 2. In Round 2, 49 experts again provided opinions, elaborating on the answers from Round 1. Results were analyzed with a stricter agreement threshold, and 20 reporting criteria were carried forward to Round 3 for final consensus among experts.

 

Conclusions: Expert agreement on minimum reporting standards enables reproducible and replicable scientific evaluation of discourse in aphasia, promoting further studies and improving assessment and treatment of persons living with aphasia. A defined set of minimum reporting criteria will enable researchers to create a cohesive body of evidence to support future investigation and clinical implementation of discourse practices, while also allowing the inclusion of additional study-specific reporting.

As reporting standards are used, future research will identify discourse analysis key terms, measures, methods, and analyses where variation exists in research practice for further study. This will facilitate the standardization of discourse analysis procedures and enhance implementation in clinical aphasia services.

13:30-14:00 Break & NIH Mentoring session
15:30-16:30 Session 13: Platform session

Executive functioning and working memory

15:30
The White Matter Correlates of Domain-Specific Working Memory
PRESENTER: Autumn Horne

ABSTRACT. Introduction

Prior evidence suggests separable, domain-specific working memory (WM) buffers for maintaining phonological (i.e., speech sound) and semantic (i.e., meaning) information [1]. The phonological WM buffer’s proposed location is the left supramarginal gyrus [2], whereas semantic WM has been related to the left inferior frontal gyrus, middle frontal gyrus, and angular gyrus [3–5]. Here we investigated the role of white matter tracts connected to these regions in supporting WM. The left arcuate fasciculus (AF), previously implicated in verbal WM [6], connects the supramarginal gyrus, the proposed location of the phonological store, to frontal regions supporting rehearsal. The inferior fronto-occipital fasciculus (IFOF), inferior longitudinal fasciculus (ILF), middle longitudinal fasciculus (MLF), and uncinate fasciculus (UF) connect temporal regions representing semantics to regions such as the angular gyrus or inferior frontal gyrus, which may be involved in maintaining semantics. Thus, we predicted left AF integrity to relate to phonological WM and left IFOF, ILF, MLF, and UF integrity to relate to semantic WM.

 

Methods

For 24 individuals with aphasia following left hemisphere brain damage, behavioral scores were available on single word processing (picture-word matching with phonological and semantic distractors), phonological WM (digit matching span; mean 4.03, sd 1.12), and semantic WM (category probe span; mean 1.73, sd 71). T1 and diffusion weighted (b = 800 s/mm2) scans were obtained for each participant. Left and right hemisphere tracts of interest were dissected with ROIs drawn manually in native space [7]. Bivariate correlations between fractional anisotropy (FA) values and behavioral measures were computed. A multiple regression approach was used to test the relationship between FA and WM, while controlling for single word processing ability.
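A minimal sketch of the two-step analysis just described (bivariate correlation, then regression controlling for single-word processing) is given below in Python. The file and variable names are hypothetical placeholders, not the authors' code.

import pandas as pd
from scipy.stats import pearsonr
import statsmodels.formula.api as smf

df = pd.read_csv("fa_and_behavior.csv")  # hypothetical: one row per participant

# bivariate correlation between tract FA and a WM span
r, p = pearsonr(df["ilf_right_fa"], df["semantic_wm_span"])
print(f"right ILF FA vs. semantic WM: r = {r:.2f}, p = {p:.3f}")

# regression: semantic WM predicted by FA, controlling for single-word semantic processing
model = smf.ols("semantic_wm_span ~ ilf_right_fa + single_word_semantic", data=df).fit()
print(model.summary())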

 

Results

The left AF could only be segmented for 7 participants, and thus correlations with behavioral measures were not computed. For the remaining tracts, segmentation was possible for 13-24 participants. On the left, the only correlations with at least marginal significance were for single word semantic processing and FA values for the MLF and UF. On the right, FA values for the IFOF correlated with single word phonological processing, and FA values for the IFOF, ILF and UF correlated with semantic WM. In the multiple regressions controlling for single word processing, the relations between semantic WM and FA values remained marginally significant for the right ILF and UF (both ps = .054).

 

Conclusions

We did not observe the expected relationships between WM and left hemisphere white matter tract integrity, though others have reported a relationship between left AF integrity and verbal WM [6]; however, we had a limited ability to segment the left AF. Future work is needed to assess a larger sample of participants and analyze relationships between WM and subsections of the AF [8], as only certain subsections of the AF (e.g., the direct segment, directly connecting temporal and frontal regions) may relate to phonological WM. The right ILF and UF relations to semantic WM were a novel result and suggest possible reorganization to the right hemisphere [9]. To address these tracts’ role prior to brain damage, we will investigate correlations between the integrity of these tracts and WM performance in healthy age-matched individuals.

16:00
Executive functioning white matter structures supporting language recovery in post-stroke aphasia
PRESENTER: Celia Litovsky

ABSTRACT. Introduction

There has been increasing interest in understanding the role of executive processes (e.g., cognitive control, selection, working memory) in the recovery of post-stroke language deficits. Cortical executive function regions have been shown to be recruited for language both in post-stroke aphasia and in healthy subjects performing difficult language tasks [1]. Researchers have also found that executive regions play a major role in recovery from aphasia [2]. However, little research has evaluated whether white matter tracts involved in executive functions support language recovery [3]-[4].

Here, we specifically evaluated whether white matter integrity (measured by tract volume) of segments of the corpus callosum, bilateral cingulum, and bilateral IFOF predicted a) pre-treatment written and spoken language ability and b) treatment effectiveness (for trained and untrained items) in chronic post-stroke participants who received language therapy for sentence processing, naming, or spelling impairments.

 

Methods

Fifty-eight participants (19 female, 58±1.6 years, 59±6.9 months post-stroke) with a single left-hemisphere stroke underwent T1-weighted and diffusion-weighted imaging (b = 0, 1500 s/mm2) and completed a three-month behavioral spelling, naming, or sentence processing rehabilitation. Eleven age-matched healthy controls (8 female, 62±3.4 years) underwent the same scanning protocol. Whole-brain deterministic tractography was performed in ExploreDTI [5] using constrained spherical deconvolution [6], and seven white matter tracts (three segments of the corpus callosum, bilateral cingulum, bilateral IFOF) were segmented according to standard protocols [7]-[8]. Each tract’s volume was normalized by the participant’s right hemisphere white matter volume to control for between-subject differences in brain size. Normalized tract volumes were entered into stepwise regression models to predict pre-treatment language ability (spelling, naming, and sentence processing), improvement on trained items, and generalization to untrained items.
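The Python sketch below illustrates the volume normalization and one simple (AIC-based, forward) variant of stepwise selection; the file, tract, and outcome names are hypothetical, and the authors' stepwise criterion may differ, so this is an assumption-laden illustration rather than their pipeline.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("tract_volumes.csv")  # hypothetical per-participant table
tracts = ["cc_genu", "cc_body", "cc_splenium", "cing_l", "cing_r", "ifof_l", "ifof_r"]
for t in tracts:
    # normalize each tract volume by right-hemisphere white matter volume
    df[t + "_norm"] = df[t] / df["rh_wm_volume"]

selected, remaining = [], [t + "_norm" for t in tracts]
best_aic = smf.ols("naming_severity ~ 1", data=df).fit().aic
improved = True
while improved and remaining:
    improved = False
    # try adding each remaining predictor and keep the one that lowers AIC most
    aics = {p: smf.ols("naming_severity ~ " + " + ".join(selected + [p]), data=df).fit().aic
            for p in remaining}
    cand, cand_aic = min(aics.items(), key=lambda kv: kv[1])
    if cand_aic < best_aic:
        selected.append(cand)
        remaining.remove(cand)
        best_aic, improved = cand_aic, True
print("selected predictors:", selected)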

 

Results

(1) Behavioral measures of executive functions at the pre-treatment time point were significantly associated with pre-treatment severity of impairment in spelling (p < .05), spoken naming (p < .01), and sentence processing (p < .05). However, these behavioral measures were not significantly associated with the magnitude of response to language treatment (all ps > .1). (2) Neural analyses found that executive white matter tract integrity significantly predicted both pre-treatment language severity [spelling: p < .001, spoken naming: p < .05, sentence processing: p < .01] and response to treatment on trained (p < .05) and untrained (p < .05) items. (3) Specifically, volumes of the genu of the corpus callosum (p < .05) and right cingulum (p < .01) explained significant and unique variance in trained and untrained item improvement. (4) Volumes of the genu and right cingulum were also significantly greater in post-stroke individuals than in age-matched healthy control participants (ps < .001).

 

Conclusions

Volumes of two white matter tracts associated with executive processes (genu and right cingulum) were found to be associated both with severity of language deficits and with extent of improvement following language therapy. These results are consistent with and extend previous evidence indicating relevance of executive brain regions in post-stroke language recovery [2]. We also report, for the first time, evidence of significant post-stroke neuroplasticity (increased WM volume relative to controls) in executive white matter structures.

16:30-16:45 Break
16:45-17:45 Session 1: Welcome and Platform session

Prognosis, prediction and variability

16:45
Biomarkers of neuroplasticity improve predictions of aphasia severity
PRESENTER: Haley Dresang

ABSTRACT. Introduction

Variability in post-stroke aphasia has been attributed to several established factors, like age, lesion size, and time post-stroke (Plowman et al., 2012). However, predicting language recovery remains imprecise. We examine whether genetic biomarkers and electrophysiological indicators of neuroplasticity improve our ability to predict language recovery, measured by aphasia severity (Western Aphasia Battery-Aphasia Quotient [WAB-AQ]; Kertesz, 2007). We specifically investigate whether language recovery predictions are improved by examining interactions between 1) a common genetic polymorphism of the brain-derived neurotrophic factor gene (BDNF) and 2) neurophysiological indicators of plasticity – cortical excitability measured through motor-evoked potentials (MEPs) before and after continuous theta burst stimulation (cTBS).

 

Methods

Participants were 19 adults with chronic aphasia subsequent to a left-hemisphere ischemic stroke. We collected MEPs pre- and post-cTBS to primary motor cortex and obtained saliva samples for genotyping. We evaluated the extent to which BDNF Val66Met polymorphism interacted with pre-cTBS cortical excitability (log-transformed MEPs [LnMEPs]), and cTBS-induced MEP-suppression (10 minutes post- minus pre-cTBS LnMEPs) to predict language recovery (WAB-AQ). These predictors were added to established predictors of age at stroke, lesion volume, and log-transformed time post-stroke. We fit a backward stepwise linear regression model with these factors.
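To make the model structure concrete, the sketch below shows, in Python/statsmodels, a full regression with genotype interactions of the kind a backward stepwise procedure could start from. The variable names and the elimination criterion are assumptions for illustration; this is not the authors' code.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("tms_genetics.csv")  # hypothetical: one row per participant
df["log_tpo"] = np.log(df["months_post_onset"])  # log-transformed time post-stroke

full = smf.ols(
    "wab_aq ~ bdnf_genotype * (age_at_stroke + baseline_lnmep + delta_lnmep)"
    " + lesion_volume + log_tpo",
    data=df,
).fit()
print(full.summary())
# A backward stepwise procedure would then iteratively drop the term whose removal
# most improves the model criterion (e.g., AIC) until no further improvement.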

 

Results

While controlling for the effects of time post-stroke (β = -0.63, p = 0.002) and total lesion volume (β = -0.10, p < 0.001), BDNF genotype showed a main effect such that, with all other factors held at their average values, Val66Val carriers showed better language recovery than Met carriers (β = 22.68, SE = 1.64, t = 13.86, p < 0.001). Furthermore, BDNF genotype interacted with each predictor of interest: age at stroke, baseline MEP, and change in MEP.

First, increased age at stroke was associated with lower WAB-AQ for both groups, but had a stronger effect on language recovery for Val66Val carriers (β = -1.17, p < 0.001) than Val66Met carriers (β = -0.81, p < 0.001). This effect was driven by a significant difference for individuals who were younger (β = -2.99, p < 0.001) but not older at CVA (β = -0.003, p = 1). Second, cortical excitability was positively associated with WAB-AQ for Val66Val carriers (β = 6.48, p < 0.001), but negatively associated with WAB-AQ for Val66Met carriers (β = -8.49, p < 0.001). Third, Val66Met carriers whose language recovered less (i.e., lower WAB-AQ) showed increased paradoxical responses to cTBS (β = -8.29, p < 0.001), whereas cTBS-induced changes in MEP-suppression were not associated with variability in recovery/severity for Val66Val carriers (β = 0.30, p = 0.59).

 

Conclusions

Neurophysiological indicators and genetic biomarkers of neuroplasticity improve ability to predict post-stroke language recovery. The Val66Val genotype is associated with stronger neuroplasticity than Val66Met, so factors like age at stroke had a stronger effect for Val66Val carriers. Furthermore, BDNF genotype interacted with cortical excitability and stimulation-induced plasticity to predict aphasia recovery. These findings provide novel insights into mechanisms of variability in stroke recovery and may improve aphasia prognostics.

17:15
Prediction of post-stroke aphasia treatment outcomes is significantly improved by inclusion of local resting-state fMRI measures
PRESENTER: Robert Wiley

ABSTRACT. Introduction

While the use of neural-based measures for predicting response to treatment in post-stroke aphasia (PSA) is of interest for basic science, its utility for clinical purposes is qualified by the relative difficulty and expense of collecting such measures. Thus, neural measures may be worth collecting only if they contribute unique information toward patient diagnosis or prognosis. Resting-state fMRI (rs-fMRI) is attractive because, compared to other neuroimaging approaches, the data are relatively easy to collect. Recent work with rs-fMRI (e.g., Iorga et al., 2021; Demarco & Turkeltaub, 2020; Guo et al., 2019) indicates that local rs-fMRI analyses (as opposed to connectivity-based approaches) distinguish between healthy and lesioned tissue and index domain-specific language deficits. However, it remains an open question whether such measures contribute unique information for the purposes of predicting response to treatment, above and beyond what can be predicted on the basis of demographic, behavioral, or simple structural MRI (lesion volume) measures.

 

Methods

64 individuals with PSA subsequent to a single left-hemisphere stroke were treated for deficits in naming (n = 28), spelling (n = 22), or syntax (n = 14), and completed rs-fMRI scans prior to beginning treatment. Response to treatment was measured as percentage of maximum gain from pre-to-post assessments on trained items. The rs-fMRI data were used to measure the fractional Amplitude of Low Frequency Fluctuations (fALFF; Zou et al., 2008), which was normalized within participants across the 96 anatomical gray-matter parcels of the Harvard-Oxford Atlas (Desikan et al., 2006).

Response to treatment was first predicted using the best set of demographic and behavioral measures (determined by exhaustive search through all available variables, e.g., pre-treatment accuracy, age, sex), and prediction accuracy was assessed with cross-validation. The process was then repeated including neural measures (fALFF and lesion volume), with the best set of neural measures selected via elastic net regression (Zou & Hastie, 2005). Differences in precision and in the 80% prediction intervals between the two sets of models (demographic/behavioral only versus including neural measures) were statistically assessed using Monte Carlo analysis.
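A minimal sketch of this second modelling stage is shown below, assuming a flat table with one normalized fALFF value per parcel plus demographic/behavioral predictors and lesion volume. It uses scikit-learn's ElasticNetCV with cross-validated predictions to obtain a median absolute error; the feature names and preprocessing are assumptions, not the authors' implementation.

import numpy as np
import pandas as pd
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("falff_features.csv")            # hypothetical participant-level table
falff_cols = [c for c in df.columns if c.startswith("fALFF_")]
X = df[["pretreatment_accuracy", "age", "lesion_volume"] + falff_cols]
y = df["pct_max_gain"]                            # % of maximum possible gain on trained items

model = make_pipeline(StandardScaler(), ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9], cv=5))
pred = cross_val_predict(model, X, y, cv=5)       # out-of-sample predictions
mae = np.median(np.abs(pred - y))
print(f"cross-validated median absolute error: {mae:.1f} percentage points")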

 

Results

The median absolute error (MAE) and the width of the 80% prediction interval were significantly improved when including fALFF measures for all three language domains (see Figure). MAEs for predictions based on behavioral/demographic measures ranged from 11-17% across the language domains, improving to just 1-3% when including neural measures (p’s < 0.05). Similarly, 80% prediction intervals around the estimated gains on treated items narrowed from ± 22-32% to ± 4-6% (p’s < 0.05), indicating that predictions including fALFF are not only more accurate but also more certain.

 

Conclusions

These results are the first to statistically assess whether local rs-fMRI measures (fALFF) improve predictions of treatment outcomes in aphasia beyond demographic and behavioral measures. For all three language domains tested (naming, spelling, and syntax), the addition of fALFF measures from anatomical cortical regions significantly improves precision, and provides narrower prediction intervals. Monte Carlo procedures demonstrate that these improvements are not attributable to random chance or “over-fitting” due to including additional predictors.

17:45-18:45 Session 14: Membership Meeting

Membership meeting

All attendees are welcome, but only members of the Academy of Aphasia vote

17:45
Members meeting
Session 3 (permanent): Poster session

Sunday, 12.30pm-2pm: Predictors of Recovery; Assessment and Diagnostics; Treatment

AoA2021: Conference overview and navigation

ABSTRACT. Conference overview for participants

The adaptation and standardization of the Catalan version of the Comprehensive Aphasia Test (CAT-CAT): a preliminary study
PRESENTER: Io Salmons

ABSTRACT. Introduction

Very few studies have examined the linguistic abilities of Catalan-speaking people with aphasia with the aim of developing an assessment tool. The diagnosis of aphasia in Catalan speakers is mostly based on clinical observation or on assessment tests in their second language. The goal of the present study is to present the Catalan adaptation (CAT-CAT) of the Comprehensive Aphasia Test (CAT; Swinburn et al., 2005; Howard et al., 2010) and the preliminary results of the ongoing standardization process.

 

The CAT is a comprehensive test that evaluates the cognitive and linguistic abilities of aphasic subjects. It is being adapted to several languages (Fyndanis et al., 2017) by investigators from the Collaboration of Aphasia Trialists project (the Tavistock Trust for Aphasia). Cultural, linguistic and psychometric properties such as frequency or imageability (Rofes et al., 2018) have been controlled during the development of the CAT-CAT.

 

Methods

Here we present the preliminary data on the cognitive and linguistic parts of the CAT-CAT from 43 healthy participants (age range: 20-80 years old; 27 women and 16 men; 40 right-handed and 3 left-handed) and 9 people with different subtypes of aphasia and etiologies (age range: 45-78 years old, 6 women and 3 men, all right-handed, time post-onset: 5-11 years). All the participants were native speakers of Catalan.

 

The test consists of 6 cognitive tasks to evaluate possible associated cognitive deficits, such as apraxia or acalculia, and 21 tasks to assess the following linguistic behaviors: comprehension, production, repetition, naming, reading and writing.

 

Results

The descriptive results showed that the performance of healthy individuals on the cognitive and linguistic parts was at ceiling. The performance of the participants with aphasia on the cognitive tasks was slightly better than their performance on the linguistic tasks, but worse than the healthy subjects’ performance (especially on the verbal fluency and recognition memory tasks). Regarding the linguistic evaluation, meaningful differences were observed between the two groups on several tasks, specifically the subtests that assess sentence comprehension, verbal digit span, sentence repetition, oral and written object naming, written and oral picture description, and the reading and repetition of pseudowords.

 

 

Conclusions

The preliminary results suggest that the CAT-CAT was sensitive to the linguistic deficits of the Catalan-speaking participants with aphasia and, hence, that it can be a useful tool to assess their language skills in their native language. We are now broadening our sample in order to conduct the statistical analyses needed to determine its reliability as a diagnostic tool.

The adaptation of the Cantonese version of Comprehensive Aphasia Test (Cant-CAT) for speakers with aphasia in Hong Kong: A pilot investigation
PRESENTER: Yee Ting Ng

ABSTRACT. Introduction

The Comprehensive Aphasia Test (CAT; Swinburn, Porter, & Howard, 2004) is an extensive, standardized formal battery designed to evaluate linguistic and cognitive impairments as well as psychosocial deficits among people with aphasia (PWA). It has been widely used by clinicians in western countries to estimate the impact of aphasia on PWA’s quality of life and to monitor treatment recovery and outcomes over time (Howard et al., 2010). A recent report by Fyndanis et al. (2017) summarized that the CAT had been adapted into 15 languages, including the Indo-European languages of Basque, Catalan, Croatian, Cypriot Greek, French, (Standard Modern) Greek, Hungarian, Norwegian, Serbian, Spanish, Swedish, Turkish, Danish (Swinburn, Porter & Howard, 2014), Dutch (Visch-Brink, Vandenborre, de Smet, & Mariën, 2014), and the Semitic language of Arabic (Abou El-Ella et al., 2013). At present, there are no reports of any formal adaptation of the CAT into any Asian languages.

The Cantonese version of the Western Aphasia Battery (CAB; Yiu, 1992) has been the most popular aphasia battery in Hong Kong since the 1990s. Only recently have other assessment tools in Cantonese become clinically available (see Kong, 2017). The aim of this study was to explore the development and adaptation of a Cantonese version of the CAT (i.e., the Cant-CAT) for Cantonese-speaking PWA in Hong Kong. Specifically, modification of test items involved careful consideration of the unique linguistic properties (e.g., word length, sentence structure) and psychometric variables (e.g., frequency, imageability, regularity) of Cantonese, as well as appropriateness for Chinese culture.

 

Methods and preliminary results

The adaptation process was divided into two phases. In Phase 1, original test items in each CAT subtest were translated into Chinese and modified with careful control of the psycholinguistic variables specific to Cantonese (see examples in Table 1). Each item that was inappropriate for the Cantonese-speaking PWA in Hong Kong was replaced by up to three proposed possible alternatives.

Phase 2 (now in progress) involves piloting the preliminary version of the Cant-CAT (i.e., with the new items proposed in Phase 1) among eight healthy middle-aged (45-65 years) native Cantonese speakers in Hong Kong. These control results will be analyzed to determine whether further changes to test items are needed; the best alternative for each replacement item will also be selected for adoption in the final Cant-CAT, which will then be administered to nine native Cantonese-speaking PWA (three with mild, three with moderate, and three with severe aphasia). Concurrent validity will be established by correlating PWAs’ subtest scores on the Cant-CAT and the CAB. In addition, inter- and intra-rater reliability will be estimated.

 

Conclusion

It is expected that the final deliverables of this investigation will lead to three important implications. First, a new and more comprehensive formal assessment of aphasia will become available for clinicians who work with Cantonese-speaking PWA. Second, with further validation, the Cant-CAT can provide clinicians with a comprehensive profile useful for diagnosing aphasia and treatment planning in PWA. Finally, this investigation can offer directions for future CAT adaptation in other Asian languages, such as Mandarin Chinese.

Between-session intraindividual variability in phonological, lexical, and semantic processing in post-stroke aphasia: A pilot study
PRESENTER: Lilla Zakariás

ABSTRACT. Introduction

Neurolinguistics and cognitive neuropsychology have a long-standing tradition of focusing on mean-level performance measures such as accuracy and mean reaction times. However, several recent neuropsychological studies suggest that intraindividual variability (IIV) – within-person variation in performance over time – may characterize behavior better than mean performance (Hultsch et al., 2008). Despite the common clinical observation that people with aphasia often show marked variations in their day-to-day performance on a variety of tasks, only a few empirical studies have investigated IIV in aphasia (Duncan et al., 2016; Laures, 2005; Naranjo et al., 2018; Stark et al., 2016; Villard & Kiran, 2015, 2018), and to the best of our knowledge, no study has systematically investigated IIV in language processing in post-stroke aphasia. The aims of the current study were to investigate (1) IIV in language processing (i.e., phonological, lexical, and semantic processing) across days, and (2) the relationship between IIV in language processing and mean accuracy and standardized measures of language in post-stroke aphasia.

Methods

Thirteen people with post-stroke aphasia (5 female, mean age = 61.23 years, mean time post-onset = 1.81 years) participated in the study. Participants were assessed on four different days (mean time between sessions 1 and 4 = 5.38 days) using the same set of six auditory experiments on each day. The experiments tested phonological, lexical, and semantic processing with and without WM demand (henceforth: 1. PHON, 2. PHON-WM, 3. LEX, 4. LEX-WM, 5. SEM, 6. SEM-WM, respectively; for details on task procedures, instructions, and stimuli, see Table 1). In addition, the Western Aphasia Battery (WAB) and the Comprehensive Aphasia Test-Hungarian (CAT-H) were administered to assess participants’ language profiles and aphasia severity. To examine IIV, we calculated two coefficients of variation (COV) for each task – one for accuracy (ACC-COV) and one for reaction times (RT-COV). We investigated the associations between the COV indices and mean accuracy across tasks, and the WAB and the CAT-H, using Pearson’s correlations.
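The coefficient of variation used here is simply a participant's standard deviation divided by their mean across the four sessions, computed separately per task for accuracy and reaction times. A small Python sketch under an assumed long-format data layout is given below (not the study's actual script).

import numpy as np
import pandas as pd

acc = pd.read_csv("session_accuracy.csv")  # hypothetical columns: participant, task, session, accuracy
acc_cov = (acc.groupby(["participant", "task"])["accuracy"]
              .agg(lambda x: np.std(x, ddof=1) / np.mean(x))
              .rename("ACC_COV")
              .reset_index())
print(acc_cov.head())

# ACC-COV per task can then be correlated with severity scores, e.g.:
# scipy.stats.pearsonr(acc_cov_phon["ACC_COV"], severity["cat_h_score"])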

Results

ACC-COV in PHON showed a significant negative correlation with the CAT-H (r = –0.71, p = 0.01) and a marginally significant correlation with the WAB AQ (r = –0.50, p = 0.08). ACC-COV in SEM showed a significant negative correlation with the CAT-H (r = –0.65, p = 0.02) and the WAB (r = –0.69, p = 0.01). We observed mostly negative but non-significant correlations between all other ACC-COVs and standard measures of language. We found strong negative correlations between ACC-COV and mean accuracy in PHON, LEX, and SEM (r = –0.74 to –0.79, p < 0.01). RT-COV in PHON and LEX showed non-significant positive correlations with the CAT-H and the WAB (r = 0.39–0.57, n.s.).

Conclusions

People with post-stroke aphasia show IIV in language processing across days. Greater IIV in accuracy may be associated with more severe aphasia and lower mean performance in post-stroke aphasia. IIV in accuracy and RTs may be driven by different underlying mechanisms.

Picture Naming and Word Finding: How Well Are Images Controlled in Lexical Retrieval Studies?

ABSTRACT. Introduction

Image naming paradigms have been broadly utilized to better understand the linguistic processes underpinning theories of lexical retrieval and to explore linguistic deficits in people with aphasia. However, the image naming process is not equivalent to the process of word retrieval, as image-related characteristics need to be encoded prior to the linguistic characteristics associated with lexical and sublexical processes. In order to name an object or image, it is necessary to first recognize the image (Humphreys, Price, & Riddoch, 1999; Johnson, Paivio, & Clark, 1996). Physical image characteristics affect object recognition and subsequent naming performance. For example, more surface detail in images is associated with faster and more accurate recognition in studies that compared recognition of line drawings to photographs (Brodie, Wallace & Sharrat, 1991; Heuer, 2016). In summary, physical stimulus characteristics can bias image naming by influencing the prelinguistic encoding processes of image recognition. Thus, it is conceivable that the images themselves contribute to the effort of generating the name of the represented concept. Therefore, having control over physical as well as psycholinguistic stimulus properties of images to be named is critical when using image sets in behavioral experiments.

The purpose of this study was to determine how well image characteristics were controlled in tDCS studies that explored image naming in people with aphasia.

Methods

A systematic literature review was conducted following the PRISMA guidelines for systematic reviews (Moher et al., 2009). The databases PsycINFO, CINAHL Plus and Medline were searched with the terms: transcranial direct current stimulation OR tDCS AND image naming OR picture naming OR word finding OR anomia AND aphasia. Fifty articles were included in the final review after title and abstract screening. Information about visual and linguistic stimulus characteristics was collected.

 

Results

Of the final 50 studies reviewed, 22 did not report any source or characteristics for the images used. The remaining 23 studies reportedly used either image naming tests for aphasia or image sets that are controlled and normed for characteristics such as name agreement and visual complexity. See Table 1 for a list of aphasia tests and normed image sets. Six studies did not report any linguistic characteristics of visual stimuli used.

 

Conclusions

Linguistic characteristics were reported more often than the physical image characteristics of visual stimuli (44 vs 22 studies). A limited selection of well-known image sets and tests was predominantly used across studies that reported on image characteristics. Importantly, published norms such as name and image agreement for many image sets are based on young adults, while people with aphasia tend to be older. Further, there are concerns regarding the ecological validity and cultural sensitivity of older tests and image sets such as the BNT (Bernstein-Ellis, Higby & Gravier, 2021) and the Snodgrass and Vanderwart (1980) images (Viggiano et al., 2004). In summary, this work highlights the need for better control of the physical stimulus characteristics of pictures used in naming studies with people with aphasia and the need for a corpus of newer images normed on older adults.

Is awake brain surgery in glioblastoma patients with severe aphasia feasible? Four case reports

ABSTRACT. Introduction

Glioblastomas (GBM) are malignant primary brain tumors associated with a limited median survival. Traditionally, surgical treatment is performed under general anesthesia, but some recent studies revealed that awake surgery in high-grade glioma (HGG) resulted in better outcomes (Gerritsen, Arends, Klimek, Dirven, & Vincent). However, as severe aphasia is common in HGG patients (Noll, Sullaway, Ziu, Weinberg, & Wefel), the intraoperative distinction between pre-existent aphasia and paraphasias induced by direct electrical stimulation (DES) or surgery becomes a challenge.

 

Methods

We present four cases (A1, B2, C3 and D4) with GBM in eloquent language areas (frontal, temporal and/or parietal lobe) and severe aphasia who were selected for awake surgery. Pre- and postoperatively, an extensive test protocol was administered covering different linguistic levels (phonology, semantics and syntax) and modalities (comprehension, production and reading). Intraoperative language tasks for DES and resection were selected from the Dutch Linguistic Intraoperative Protocol (De Witte et al.) and adapted to patients’ preoperative level.

 

Results

Preoperatively all patients had severely impaired scores (z≤-2.00) on TT, BNT, verbal fluency and DIMA Sentence Completion (A1, D4). DIMA Repetition was mildly (A1) to severely impaired (C3, D4). Repetition was only screened in B2 (raw score 12/15). DIMA Semantic odd-picture-out was mildly (A1) to severely impaired (C3, D4), but feasible in C3 when presented without time constraints (odd-picture/word-out) and via the graphemic input route. For intraoperative monitoring, DuLIP tasks were simplified by selecting high-frequency words, diminishing phonological complexity and/or presenting items via dual input routes (auditory and visual). Functional boundaries were successfully detected by the occurrence of new paraphasias, neologisms or perseverations. Postoperatively, there was full recovery from a severe aphasia (all tests z>-1.50, apart from letter fluency z=-1.50) in A1. Although B2 and C3 improved on TT (Δz≥1.50), they remained severely impaired (z≤-2.00). BNT recovered to normal scores in C3 (z>-1.50). Category and letter fluency remained severely impaired (z≤-2.00) in B2 and C3, although administration of letter fluency was now possible in B2. DIMA Repetition deteriorated in C3 (administration was no longer possible). The ABC was below the cut-off score, with errors in comprehension (B2) and production (B2, D4). C3 remained stable on semantic odd-picture-out tasks (without time constraints), and sentence completion recovered (z≥-1.5).

 

Conclusions

We demonstrated for the first time that awake surgery in severely aphasic GBM patients was feasible without further deterioration of aphasia. Almost full recovery was present in A1 and naming recovered in C3. The degree of postoperative improvement may be influenced by preoperative aphasia severity, including the level of phonology (repetition), as shown in earlier studies (El Hachioui et al., 2013; Sierpowska et al.).

 

For adequate intraoperative monitoring of severely aphasic patients, extensive preoperative neurolinguistic examination of the different input and output routes (i.e., auditory, visual, graphemic) is necessary, including an error analysis. Subsequently, the linguist can intraoperatively focus on the intact linguistic levels and modalities, thereby facilitating reliable interpretation of further language deterioration during DES and surgery. As these are only case descriptions, the added value of awake surgery in GBM remains to be demonstrated with an RCT (Gerritsen et al.).

Distinguishing between phonological output buffer deficit and apraxia of speech: Error analysis to the rescue
PRESENTER: Naama Friedmann

ABSTRACT. Introduction

When a patient produces phoneme errors, it is challenging to decide whether they result from apraxia of speech (AOS) or from a phonological output buffer (POB) deficit (Haley et al., 2013). 

Studies have found that individuals with POB deficits produce words (and nonwords) with phoneme errors but substitute/omit/add whole morphological affixes, number words, and function words, so that they may substitute a whole unit with another whole unit of the same kind. Based on this, Dotan & Friedmann (2015) suggested that beyond phonemes, the POB also holds pre-assembled morphological affixes, whole number words, and function words. We examined whether this phenomenon could serve as a basis for a differential diagnosis between POB deficits and AOS. We surmised that in affixes, number words, and function words, individuals with POB deficits will mainly make errors at the whole-unit level (seven-four), whereas individuals with AOS will produce phoneme errors that affect a phoneme within the unit and may not create another existing unit (seven-sevel). The POB immediately follows the phonological output lexicon in lexical processing, so it may still enjoy lexical feedback, whereas AOS affects later stages. We therefore examined whether individuals with POB deficits show an advantage for the production of existing words in comparison to nonwords, whereas individuals with AOS show similar production of words and pseudowords.

 

Methods

The participants were 7 individuals who produced phonological errors in spontaneous speech, repetition, naming, and reading aloud. Their production of nonwords, morphologically simple and complex words, number and function words was tested in tasks of repetition, oral reading, and naming. 

 

Results  

Both groups produced phonological errors in the root phonemes. However, the 4 POB-impaired individuals substituted or omitted whole morphological affixes, whole number words, and function words; the 3 individuals with AOS made phoneme errors even within affixes and number words, substituting or omitting phonemes even when they were parts of affixes and number words. Furthermore, individuals with a POB deficit showed better production of words compared to pseudowords, whereas individuals with AOS showed no lexicality effect.

 

This study offers a novel way to distinguish between AOS and a POB-deficit.

The importance of verbs in diagnosing aphasia
PRESENTER: Dörte de Kok

ABSTRACT. Introduction

In clinical practice, standardized tests are used to assess the presence of aphasia. Verbs often play a minor role in these tests. Their role in language, however, is essential and they are known to be more difficult to retrieve for people with aphasia (PWA) (Bastiaanse & Jonkers, 1998; Mätzig, Druks, Masterson, & Vigliocco, 2009). Therefore, milder forms of aphasia might be missed in the diagnostic process.

In the current study, we investigate whether a group of people with brain damage but no diagnosed aphasia (BDnoA) shows specific problems in verb retrieval and, if so, whether performance is driven by the same psycholinguistic variables as in aphasia, e.g., age of acquisition (AoA) and imageability (Bastiaanse, Wieling, & Wolthuis, 2016).

 

Methods

Data were collected during the normative study for the VAST-app (de Kok, Wolthuis, & Bastiaanse, 2018) and consist of the outcomes of object naming (ON) and action naming (AN) tasks for non-brain-damaged speakers (NBD, n = 61), PWA (n = 48) and BDnoA (n = 12). Initial diagnoses were based on the Dutch version of the AAT (Graetz, De Bleser, & Willmes, 1992).

 

Results

A Kruskal-Wallis ANOVA revealed significant differences between the groups for ON, H(2) = 53.5, p < .001. Bonferroni-corrected Dunn tests showed that PWA were less accurate than NBD (p < .001) and BDnoA (p = .012). BDnoA did not differ from NBD (p = .387). For AN, groups also differed (Kruskal-Wallis ANOVA, H(2) = 58.8, p < .001). Bonferroni-corrected Dunn tests showed that PWA scored worse than NBD (p < .001) and tended to be less accurate than BDnoA (p = .081). BDnoA scored worse than NBD (p = .047). A forward stepwise logistic regression investigating AN in the BDnoA group resulted in a model including the predictors ‘AoA’ and ‘imageability’ but not ‘frequency’ or ‘length’, χ²(2) = 21.31, p < .001. Both included predictors were significant.
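
As a rough illustration only, group comparisons and post-hoc tests of this kind could be run in Python roughly as follows (the data frames and column names are hypothetical, the scikit-posthocs package is assumed for the Dunn tests, and this is not the authors’ analysis code):

# Minimal sketch: Kruskal-Wallis test on action-naming accuracy across the three
# groups, followed by Bonferroni-corrected Dunn post-hoc tests.
import scikit_posthocs as sp
from scipy.stats import kruskal

groups = [g["an_score"].values for _, g in df.groupby("group")]   # NBD, PWA, BDnoA
h_stat, p_value = kruskal(*groups)
print(f"Kruskal-Wallis H = {h_stat:.1f}, p = {p_value:.4f}")

dunn_p = sp.posthoc_dunn(df, val_col="an_score", group_col="group",
                         p_adjust="bonferroni")
print(dunn_p)

# Item-level logistic regression of accuracy on AoA and imageability (sketch;
# the forward stepwise selection reported in the abstract is not shown).
import statsmodels.formula.api as smf
logit_fit = smf.logit("correct ~ aoa + imageability", data=item_df).fit(disp=0)
print(logit_fit.summary())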

 

Conclusions

A group of people with brain damage but no diagnosed aphasia was tested with object and action naming tasks. While the results confirmed that they did not perform worse than NBD on ON, their performance on AN was worse. The deficits in verb retrieval, albeit small, are clearly visible. Common diagnostic batteries, such as the AAT used in this study (Graetz, De Bleser, & Willmes, 1992), might miss these mild language deficits as verbs are not included or only play a minor role in the assessment.

The performance of the BDnoA group was driven by age of acquisition and imageability, in line with the findings of Bastiaanse, Wieling, and Wolthuis (2016) for PWA. It thus seems that the underlying deficit is comparable and that BDnoA present with a mild form of aphasia. With the current data we cannot, however, exclude other, non-language-specific causes for the verb retrieval deficit with certainty. Nonetheless, in order not to overlook mild cases, verb retrieval should play a more prominent role in the assessment process.

Effects of adaptive distributed practice and stimuli variability in flashcard-based anomia treatment
PRESENTER: William Evans

ABSTRACT. Introduction

There is a need to improve the treatment efficiency for people with aphasia (PwA). The current study investigated two promising treatment components, adaptive distributed practice and stimuli variability, which are hypothesized to promote learning, retention, and stimulus generalization in anomia treatment.

Distributed practice improves the long-term retention of naming practice in PwA (Middleton et al., 2020). Adaptive distributed practice (Settles & Meeder, 2016) may better maintain desirable difficulty (Bjork & Bjork, 2011) and improve treatment efficiency by reviewing easily-learned words less frequently over time, thereby allowing more total words to be practiced per a given number of trials. Therefore, the current study examined whether computer-based flashcard software using adaptive distributed retrieval could successfully train more words than are typically targeted in anomia treatments (e.g., ≤ 40 words, Snell et al., 2010).

If adaptive distributed practice can improve the efficiency of direct training, it is important to ensure this training generalizes beyond the treatment context (i.e., stimulus generalization, Thompson, 1989). The developmental literature has shown that stimuli variability helps improve the retention and generalization of new vocabulary (Aguilar et al., 2018). However, anomia treatments for PwA often rely on training a single picture exemplar, potentially overtraining one stimulus-response mapping at the cost of stimulus generalization. Therefore, the current study also examined whether varying the prompt type (description vs. picture) and the number of trained exemplars would facilitate stimulus generalization in an easily-measured ‘proof of concept’ transfer context: untrained picture exemplars of trained words.

 

Methods

Two participants with post-stroke aphasia completed an effortful retrieval adaptive distributed practice naming intervention using Anki in a single-subject multiple baseline design. Naming probes consisted of 40 untrained and 120 trained words balanced across three stimuli conditions: low vs. high picture variability (one vs. three trained pictures for each target word) and written/auditory verbal description. One trained and one untrained picture exemplar were probed for each trained word. Participants were taught to use Anki during one-on-one sessions 2x/week for two weeks, followed by daily independent practice and one-on-one treatment 1x/week for ten weeks. Naming performance was assessed via three baseline probes, weekly treatment probes, and follow-up probes at one, four, and twelve weeks post-treatment. Statistical comparisons and effect sizes were estimated using Bayesian generalized mixed-effect models (Bürkner, 2017).
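
The abstract cites Bürkner (2017), i.e. the R package brms; purely to illustrate the general shape of such a model in Python, a comparable Bayesian logistic mixed model could be sketched with the bambi package (all column names are hypothetical, and this is not the authors’ model specification):

# Sketch of a Bayesian logistic mixed-effects model of trial-level naming accuracy,
# loosely analogous in structure to a brms model.
import bambi as bmb

# trials: one row per naming probe trial, with hypothetical columns
#   correct (0/1), week, condition, participant, item
model = bmb.Model(
    "correct ~ week * condition + (1|participant) + (1|item)",
    trials,
    family="bernoulli",
)
idata = model.fit(draws=2000, chains=4)   # posterior samples as an ArviZ InferenceData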

 

Results

Compared to direct training effects in previous anomia treatments (e.g., Quique et al., 2019), participants showed excellent acquisition and retention three months post-treatment for both trained and untrained picture exemplars. Effects of stimuli variability and type were not reliably different from zero.

 

Conclusions

These case studies suggest that combining effortful retrieval and adaptive distributed practice is a highly effective way to re-train more words than can typically be targeted during anomia treatment. The treatment resulted in stimulus generalization across conditions, indicating improved lexical access beyond what could be attributed to simple stimulus-response mapping.  Finally, this promising treatment relies on freely available open-source flashcard software and asynchronous telepractice (Cherney et al., 2011), making it highly feasible for real-world implementation in limited treatment contexts.

Diagnostic Instrument for Mild Aphasia (DIMA): sensitive and valuable addition to standard language assessment in glioma patients
PRESENTER: Saskia Mooijman

ABSTRACT. Introduction

Low-grade glioma (LGG) patients typically suffer from milder aphasia than high-grade glioma (HGG) or stroke patients. Therefore, their linguistic impairments often cannot be detected with standard aphasia tests (e.g., Satoer et al., 2013). The Diagnostic Instrument for Mild Aphasia (DIMA) is the first standardized test-battery to assess mild language disorders on different linguistic levels. We investigate pre- and postoperative linguistic abilities of LGG and HGG patients with the DIMA.

 

Methods

The DIMA consists of subtests that tap phonology (word, compound, non-word, sentence repetition), semantics (odd-picture-out), and syntax (sentence completion). Additionally, we administered the Boston Naming Test, Category and Letter Fluency, and the Token Test. Patients were assessed before awake surgery (T1, N=98), three months (T2, N=69), and one year (T3, N=30) postoperatively. DIMA performance was compared to healthy controls (N=214). Group differences were examined with parametric (t-test) and nonparametric (Mann-Whitney U, Wilcoxon) tests.
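
For reference, the comparisons named above map onto standard SciPy calls; a minimal sketch (array names hypothetical, not the authors’ code):

# Between-group comparison at T1 (patients vs. healthy controls) and
# within-patient comparison between T1 and T2 (paired), as a sketch.
from scipy.stats import ttest_ind, mannwhitneyu, wilcoxon

t_stat, p_t = ttest_ind(patients_t1, controls, equal_var=False)              # parametric
u_stat, p_u = mannwhitneyu(patients_t1, controls, alternative="two-sided")   # nonparametric

w_stat, p_w = wilcoxon(patients_t1_paired, patients_t2_paired)               # paired, nonparametric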

 

Results

DIMA: Preoperatively, patients deviated on compound and sentence repetition, semantic odd-picture-out, and sentence completion (p<0.05). HGG patients performed worse than LGG on word, non-word, and sentence repetition (p<0.05). There was no effect of hemispheric tumor localization. At T2, non-word repetition also became impaired (p<0.05) and there was a decline compared to T1 on all repetition tasks (p<0.05). At T3, there was a deterioration compared to T1 on sentence completion (p<0.01).

Standard tests: At T1, patients were impaired on BNT, Category- and Letter Fluency (p<0.01). HGG patients performed worse than LGG patients on BNT and TT (p<0.01). Patients with left-hemispheric tumors performed worse on BNT and Letter Fluency compared to patients with right-hemispheric tumors (p<0.05). At T2, patients declined compared to T1 on Letter Fluency (p<0.01). At T3, only BNT and Category Fluency remained impaired (p<0.05), with no significant declines compared to T1.

 

Conclusions

The DIMA is the first test-battery to detect peri-operative impairments in patients with left- or right-hemispheric gliomas at different linguistic levels. Pre- and postoperative impairments were found on phonological (repetition) and syntactic subtests (sentence completion) of the DIMA. The semantic level (odd-picture-out) was only impaired short-term postoperatively. Regarding the standard tests, BNT and Verbal Fluency detected impairments at all test moments, while Token Test scores did not deviate.

Awake surgery seemed to have protected most linguistic functions in the long term. However, the DIMA appeared sensitive in detecting postoperative decline relative to baseline. All phonological DIMA subtests captured short-term decline, in line with earlier evidence for the value of (non-)word repetition (Sierpowska et al., 2017). As expected, Verbal Fluency was also sensitive to short-term deterioration. DIMA sentence completion was the only test sensitive enough to detect further long-term decline, reflecting earlier spontaneous speech analyses (Satoer et al., 2018). Left-hemispheric tumor localization only affected standard test performance. HGG patients had more severe impairments than LGG on DIMA repetition and on standard tests (BNT and TT). We advise adding the DIMA to standard language evaluation of glioma patients, as it allows for more detailed counseling about language outcome at the different linguistic levels, with indications for rehabilitation.

WAB-R Profiles in Progressive Speech and Language Disorders: Longitudinal Findings
PRESENTER: Heather Clark

ABSTRACT. Previous work demonstrated that relative performance across WAB-R composite scores had good agreement with consensus diagnosis of the semantic (svPPA) and agrammatic variants of PPA (with or without AOS; agPPA), and primary progressive AOS (PPAOS). This study examined performance of these metrics longitudinally in 69 adults diagnosed with svPPA, agPPA or PPAOS. Performance ratios were calculated between: the auditory comprehension and naming and word-finding composite scores (Comprehension:Naming); the auditory comprehension composite score and fluency rating (Comprehension:Fluency); and the rating of information communicated during the spontaneous speech tasks relative to the naming composite score (Information Content:Naming). The relative size of these three ratios yields a profile that is flat (all ratios are roughly equivalent), has a “dip” (the Comprehension:Fluency ratio is smaller than the other two ratios), or has a “peak” (the Comprehension:Fluency ratio is larger than the other two ratios). A meaningful difference was defined as a difference of 0.20 or greater between both the Comprehension:Naming and Information Content:Naming ratios (in the same direction) and the Comprehension:Fluency ratio. Ratio profile agreement between first and final visits was assessed, relative to diagnosis. We observed maintenance of dip and peak profiles between timepoints. In contrast, fewer than half of flat profiles remained flat at the final visit. The findings suggest that “dip” and “peak” profiles had good specificity for the semantic and agrammatic variants, respectively. Flat profiles had excellent sensitivity for PPAOS, but also overidentified svPPA at initial visit. Flat profiles also overidentified agPPA, although a proportion of participants indeed evolved from PPAOS (flat profile) to agPPA (peak profile) over time. We conclude these ratio profiles add value beyond the AQ.
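
The profile rule described in the abstract can be written down directly; a minimal sketch (function name hypothetical, threshold taken from the abstract):

# Classify a WAB-R ratio profile as "flat", "dip", or "peak" using the 0.20 rule.
def classify_profile(comp_naming, comp_fluency, info_naming, threshold=0.20):
    # "dip": Comprehension:Fluency is smaller than BOTH other ratios by >= 0.20
    if (comp_naming - comp_fluency >= threshold and
            info_naming - comp_fluency >= threshold):
        return "dip"
    # "peak": Comprehension:Fluency is larger than BOTH other ratios by >= 0.20
    if (comp_fluency - comp_naming >= threshold and
            comp_fluency - info_naming >= threshold):
        return "peak"
    # otherwise the three ratios are roughly equivalent
    return "flat"

print(classify_profile(comp_naming=1.00, comp_fluency=1.30, info_naming=1.05))  # -> "peak"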

Improving Automatic Semantic Similarity Classification of the PNT
PRESENTER: Alexandra Salem

ABSTRACT. Background

In the Philadelphia Naming Test (PNT; Roach et al., 1996), paraphasic errors are classified into six major categories according to three dimensions: lexicality, phonological similarity, and semantic similarity to the target. Our team has developed software called ParAlg (Paraphasia Algorithms) for automatically classifying paraphasias along these three dimensions given a transcription (Fergadiotis et al., 2016; Mckinney-Bock & Bedrick, 2019). The classifier takes the form of a decision tree mirroring the scoring of the PNT, as illustrated in Figure 1. In ParAlg, the semantic similarity of a response to the target is determined with a binary classifier that uses a language model: a machine-learning-based model that produces meaningful representations of words in a vector space. Previously, the language model used in ParAlg was word2vec (Mikolov et al., 2013).

 

Objectives

This work focuses on improving the semantic similarity classification in ParAlg. We fine-tune a modern language model called BERT (Bidirectional Encoder Representations from Transformers; Devlin et al., 2019) alongside a binary classifier to categorize each transcribed response to a PNT item as semantically similar to the target or not. BERT produces contextual vectors, meaning the representation of a word changes based upon the context given to the model, in contrast to the static representations in word2vec. Therefore, BERT may allow for more accurate processing of polysemous words. Finally, we compare ParAlg classification results using word2vec and BERT.

 

Methodology

Our dataset is a subset of the Moss Aphasia Psycholinguistic Database (MAPPD; Mirman et al., 2010) consisting of 11,999 clinician-transcribed and categorized paraphasias (mixed, semantic, abstruse neologism, phonologically-related neologism, formal, other) from 296 participants. Errors are classified using ParAlg with word2vec or BERT to make semantic judgments. Performance is evaluated using metrics computed from the corresponding classification matrices, using 5-fold cross-validation in order to prevent over-fitting.
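
As a rough sketch of the kind of BERT-based binary semantic-similarity classifier described here (the checkpoint path, the target/response pairing scheme, and the label convention are all assumptions, not details taken from ParAlg):

# Sketch: score a target/response pair for semantic similarity with a fine-tuned
# BERT sequence classifier (Hugging Face transformers).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "path/to/fine-tuned-semantic-classifier", num_labels=2)   # hypothetical checkpoint
model.eval()

def semantically_similar(target, response):
    # Encoding the pair jointly lets BERT condition the representation of an
    # ambiguous response (e.g. "cup") on the target it is paired with.
    inputs = tokenizer(target, response, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return bool(logits.argmax(dim=-1).item())   # assumes label 1 = semantically similar

print(semantically_similar("glass", "cup"))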

 

Results

Overall, BERT outperformed word2vec when determining the semantic similarity of each error to the target. Using BERT led to 556 semantic misclassifications compared to 1,084 with word2vec. There was a downstream effect of these improvements on categorization in the PNT.

Further, a post-hoc qualitative analysis suggests that BERT’s improved performance is associated with its ability to handle polysemy. For example, the most common word2vec error is marking the target/response pair glass/cup as semantically dissimilar. This is because word2vec has a single vector for each word regardless of polysemy; the closest word to cup in word2vec space is championship, since the model is trained on news data. BERT, however, correctly classifies all 24 instances of this pair as semantically similar, since it produces contextual vectors and is able to home in on the appropriate meaning of cup.

 

Conclusions

Changing from word2vec to the contextual language model BERT makes substantial improvements to semantic similarity classification by reducing the number of semantic misclassifications by half. Moreover, BERT corrects a number of particularly “naïve” word2vec mistakes that affect the face validity of the system and may pose a significant implementation barrier for adoption by the clinical community.

Examining Cognitive-Linguistic and Learning Abilities in PWA Utilizing Language Retrieval and Novel Object Pairing Tasks
PRESENTER: Preeti Rishi

ABSTRACT. Previous research has frequently overlooked individual variability in learning and cognitive abilities that may influence aphasia therapy outcomes. Thus, the current project examined average learning outcomes and retention, as well as individual learning profiles, in people with aphasia (N=9), examining errorless and errorful learning in short behavioral tasks: a novel object pairing learning task and a word retrieval task. Other cognitive-linguistic abilities, including memory, executive functioning, and language ability, were also measured to determine how these cognitive-linguistic abilities influenced learning success in errorless and errorful conditions. Across the participants with aphasia, errorless and errorful methods were found to result in comparable outcomes immediately (p=0.12), with errorful learning resulting in greater retention on delayed testing one day after learning (p=0.08). Individually, participants displayed various profiles of learning when engaging in errorless vs. errorful-structured tasks, such that some individuals displayed learning in both conditions, some only in one condition, and others in neither. These differences in learning outcomes across errorless and errorful conditions were likely mediated by the unique cognitive demands of each condition. Moreover, errorful learning moderately correlated with Wisconsin Card Sort Task outcomes (r=0.58, p=0.10), indicating that increased executive functioning mechanisms of shifting, inhibiting, detecting errors, and feedback processing may be associated with success in errorful learning specifically. This work contributes to foundations important for understanding learning at an individual level in people with aphasia. With this work, speech-language pathologists may be better able to assess and describe learning profiles of individuals with aphasia in order to appropriately tailor therapy to suit learning preferences for maximal outcomes.

Localization Patterns of Language Errors in the Brain during Direct Electrical Stimulation: A Systematic Review
PRESENTER: Ellen Collee

ABSTRACT. Introduction

Awake craniotomy with direct electrical stimulation (DES) is the standard treatment for patients with eloquent area gliomas. Language errors are detected with DES and indicate functional boundaries that need to be maintained during tumor resection to preserve quality of life. Traditionally, counting and object naming were used during DES. The Dutch Linguistic Intraoperative Protocol (DuLIP, De Witte et al., 2015) was the first linguistic test-battery with tasks at different linguistic modalities and levels (production, comprehension, reading, phonology, semantics, syntax) divided into cortico-subcortical areas. The DuLIP model was based on the (limited) literature and knowledge available at the time. As much research has been conducted since, the model needs to be updated. We investigate the localization patterns of different speech/language errors during awake craniotomy.

 

Methods

A systematic review was conducted, and 102 studies were included reporting on speech arrests and specific speech/language errors and their corresponding brain locations during awake glioma craniotomy with DES. Language errors were counted and categorized into modalities or levels: speech errors (speech arrest, dysarthria/anarthria, verbal apraxia), speech initiation difficulty, semantic errors, phonemic errors, syntactic errors, reading errors and writing errors.

 

Results

A wide distribution of brain locations (hemispheres combined) was found for all speech/language errors (n=930), with different patterns. Cortically, errors occurred most often in the precentral gyrus (22%), while subcortically they occurred most often at the inferior fronto-occipital fascicle (IFOF: 11%). Localization patterns for specific speech/language errors were also found: speech errors (n=388)-precentral gyrus (43%), inferior frontal gyrus (9%), postcentral gyrus (4%), frontal aslant/striatal tract (3%); speech initiation difficulty (n=9)-frontal aslant tract (33%), frontal striatal tract (22%), supplementary motor area (22%); semantic errors (n=128)-IFOF (57%), superior temporal gyrus (9%); phonemic errors (n=115)-arcuate fascicle (52%), superior longitudinal fascicle (10%), uncinate fascicle (3%); syntactic errors (n=15)-inferior frontal gyrus (27%); reading errors (n=25)-temporal lobe (48%), inferior longitudinal fascicle (32%); and writing errors (n=7)-superior parietal gyrus (71%).

 

Conclusions

This is the first systematic review on the localization of speech/language errors during awake craniotomy. The localization of most speech/language errors is consistent with the assumed functionality of those brain locations as presented in the DuLIP model. However, additional locations for articulation/motor speech, phonology, reading and writing were found and have been added to the model, as shown in blue italic print (Table 1). Importantly, many articles exclusively administered object naming, which is not always sensitive enough to find deficits at different linguistic modalities. Consequently, errors may have been missed. Therefore, we suggest always using multiple language tests tapping into different modalities and/or levels. Next to DuLIP, various options are available (e.g. Dragoy et al., 2020; Ohlerth et al., 2020; Rofes et al., 2017; Sierpowska et al., 2017).

The updated DuLIP model should be considered for the future selection of perioperative language tasks to improve language testing/monitoring, which may pave the way to better postoperative language outcomes. The possible relation between different intraoperative speech/language errors and postoperative language outcome has yet to be determined.

Noun-verb dissociations in aphasia: Exploring performance patterns across naming and single word comprehension tasks.
PRESENTER: Maria Ivanova

ABSTRACT. Introduction

Naming deficits are the most pervasive symptoms of aphasia, with recent research suggesting that verb retrieval is particularly challenging (Crepaldi et al., 2011; Mätzig et al., 2009; Thompson et al., 2012).  The extent to which this processing difficulty is specific to naming has not been ascertained, with studies showing conflicting results for noun-verb dissociations in comprehension (see Soloukhina & Ivanova, 2018).  However, few studies have directly compared performance across the two word classes on naming and comprehension tasks with items matched on relevant psycholinguistic properties in large groups of people with aphasia (PWA).  Thus, the goal of this study was to probe further the extent and nature of these word class dissociations in aphasia.

 

Methods

Individuals with different types and severity of post-stroke aphasia (n=77) completed the Russian Aphasia Test (RAT) (Ivanova et al., 2021), a comprehensive standardized aphasia battery.  Here we focus specifically on performance on the four lexical-semantic subtests of the RAT: object/action naming and single word comprehension of nouns/verbs.  Each of the four tasks contained 24 items ranging in difficulty.  The target stimuli across all four noun and verb tasks were matched on relevant psycholinguistic parameters (lexical frequency, imageability, age of acquisition, name agreement, image agreement, object/action familiarity, visual complexity), permitting direct comparison of performance within and across domains.

 

Results & Conclusions

Results of linear mixed modeling showed that performance on the naming tasks was more impaired than on the comprehension tasks. Surprisingly, there was no significant effect of word class. That is, accuracy for both production and comprehension of nouns was similar to that of verbs, although numerically PWA performed slightly worse on the verb comprehension subtest than on noun comprehension. Further, there was a significant interaction between aphasia severity (as determined by overall performance on the RAT) and decrement in performance: participants with moderate and severe aphasia showed greater disparity between comprehension and naming subtests. Cumulatively, we did not observe more pronounced verb processing deficits relative to noun deficits, indicating at least partly similar mechanisms underlying the observed impairments.

Additionally, we investigated the interrelations between performance on these four subtests and sentence and discourse level subtests of the RAT by performing partial correlations between subtest scores accounting for aphasia severity. Performance on the naming and single word comprehension subtests was not significantly correlated.  Interestingly, it was verb (but not noun) comprehension that was related to sentence and discourse level comprehension, underscoring the role of the verb as a central sentence element.  Accordingly, this finding is in line with the hypothesis that verb grammatical properties are processed during comprehension and production of isolated verbs (Thompson et al., 2010), linking impairments at the single-verb and sentence levels. A similar significant relationship was observed between action naming and discourse production, with correlation between action naming and sentence production trending towards significance. Thus, while we were not able to uncover reliable noun-verb dissociations in performance on naming and comprehension tasks, we did observe that specifically verb processing deficits affected higher level linguistic impairments in aphasia.
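
A minimal sketch of one such partial correlation in Python (the pingouin package and all column names are assumptions, not the authors’ code):

# Partial correlation between verb comprehension and sentence comprehension,
# controlling for overall aphasia severity on the RAT.
import pingouin as pg

result = pg.partial_corr(data=scores, x="verb_comprehension",
                         y="sentence_comprehension", covar="rat_severity")
print(result[["n", "r", "p-val"]])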

What drives task performance in fluency tasks in people who had COVID-19?
PRESENTER: Adrià Rofes

ABSTRACT. Introduction

People who had COVID-19 may have persistent symptoms after recovery (e.g., difficulty breathing, fatigue; Carfì et al., 2020). They may also score below the norm on tasks assessing attention, memory, executive functions, and language (Kumar et al., 2021). Fluency tasks have been shown to be affected in some individuals who had COVID-19 (Almeria et al., 2020). However, the specific factors driving such performance – as measured by the total number of correct words – are still under scrutiny. The aim of this work is therefore to understand (1) whether people who had COVID-19 are more impaired in animal or letter fluency relative to a normative sample; and (2) whether their performance can be explained by demographic factors, common COVID-19 symptoms, and word properties of fluency tasks. This work derives from the need to look beyond the total number of correct items in tasks that assess language (e.g., Shallice, 1988; Whitworth et al., 2014; cf. Cutler, 1981). This approach – which includes looking at the characteristics of the words produced in fluency tasks (e.g., frequency, age of acquisition, concreteness) – has been shown to be relevant for classifying and describing the language impairments of people with neurodegeneration (Rofes et al., 2019, 2020).

 

Methods

Eighty-four Spanish-speaking people who had COVID-19 responded to a 60-second animal fluency task and to a letter (“P”) fluency task, 10-35 days after hospital discharge or self-quarantining. We obtained demographic factors (i.e., age, sex/gender, education in years) and common symptoms (i.e., anosmia, anxiety score, breathing difficulty, coughing, days of hospitalization, D-dimer, depression score, dermatological alterations, diarrhea, dysgeusia, fatigue, ferritin, fever, handedness, headache, myalgia, subjective complaints), and calculated eight word properties for each correct word (i.e., age of acquisition, concreteness, familiarity, word length, frequency, imageability, orthographic similarity, and phonological similarity). The normative sample consisted of 179 healthy adults aged 18 to 49 (Casals-Coll et al., 2013) and 346 healthy adults aged 50-94 (Peña-Casanova et al., 2009). We used a chi-square test to address Aim 1, and random forests and conditional inference trees to address Aim 2.
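
Conditional inference trees are typically fitted in R (party/partykit); as a loose Python analogue of the random-forest part of this pipeline only, with hypothetical predictor and outcome names:

# Sketch: rank candidate predictors of fluency performance with a random forest.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

predictors = ["age", "education_years", "mean_aoa", "mean_frequency",
              "mean_concreteness", "depression_score"]          # hypothetical columns
X = data[predictors]
y = data["n_correct_words"]                                     # hypothetical outcome

forest = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
importance = pd.Series(forest.feature_importances_, index=predictors)
print(importance.sort_values(ascending=False))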

 

Results

People who had COVID-19 were not more impaired on either fluency task relative to the normative sample. Age of acquisition and frequency were most important for predicting correct words in animal fluency. Concreteness and depression scores were most important for predicting the total word count in letter fluency. No other measure (i.e., demographic, linguistic, symptom/factor) emerged as important.

 

Conclusion

People who had COVID-19 were not more impaired in fluency tasks than healthy participants. Word properties described in studies of fluency and other tasks were relevant for explaining animal fluency. The relevance of concreteness and depression scores to letter fluency was not expected and may be specific to people who had COVID-19. The results await replication in a larger sample including fluency-based measures of executive functions and correlations with other test scores.

Changes of lived experience in persons with aphasia subsequent to the COVID-19 outbreak: A qualitative study to reflect perspectives of aphasia service receivers and providers

ABSTRACT. Introduction

The COVID-19 pandemic has negatively influenced the communication, community engagement, and social activities of Persons With Aphasia (PWA) worldwide (Kong, 2021). One of the most apparent impacts was the significant change in how they receive speech therapy (ST) services due to the rapid emergence of telepractice. PWA in Hong Kong, a city with confirmed COVID-19 cases relatively early on in the pandemic, have been equally affected (Fong et al., 2021).

Given the scarce reports focusing on PWA, this study aimed to examine changes in lived experience and access to aphasia-specific ST services among PWA (and caregivers) during different outbreak phases of COVID-19 in Hong Kong. Perceptions of the quality and effectiveness of these services amid COVID-19 were also compared between service receivers (i.e. PWA and caregivers) and providers (i.e. speech therapists).

 

Methods

Semi-structured interviews are being conducted involving fifteen PWA (and their caregivers), as well as ten speech therapists from five different clinical settings. In particular, service receivers were guided to individually report health-care-related, psychosocial, and financial impacts, experiences with changes/constraints in receiving ST services, and perceptions of the use and efficacy of teletherapy across different phases of the outbreak. As for the speech therapists, they were guided to summarize their practice amid the pandemic (with a specific focus on the implementation of telepractice with PWA and clients’ responses to this transition of training mode), and to reflect on their perceptions of the effectiveness and limitations (i.e., pros and cons) of service delivery.

Analysis of the collected data was performed using content analysis to determine the reported changes in lived experience and to compare the different perspectives on the implementation of telepractice for aphasia training. Moreover, net promoter scores (NPS) and Likert scales were used, respectively, to measure changes in satisfaction with telepractice as well as perceived difficulties in telepractice delivery and psychological impacts.
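
For reference, the net promoter score is conventionally the percentage of promoters (ratings of 9-10) minus the percentage of detractors (ratings of 0-6); a one-function sketch of that convention (not necessarily the exact scoring used in this study):

# Conventional net promoter score from 0-10 ratings.
def net_promoter_score(ratings):
    promoters = sum(r >= 9 for r in ratings)
    detractors = sum(r <= 6 for r in ratings)
    return 100.0 * (promoters - detractors) / len(ratings)

print(net_promoter_score([10, 9, 9, 8, 6]))   # -> 40.0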

 

Preliminary results

Preliminary results revealed that the pandemic had reduced PWA’s chances of social gatherings; there were also increased stress levels induced by mask wearing and the frequent need to practise good personal hygiene. Moreover, responses from the speech therapist group seemed to indicate a lower perceived efficacy of telepractice for delivering aphasia services. For example, the replacement of face-to-face sessions by online sessions during COVID was reported to create particular difficulties in evaluating PWA and sustaining their attention. Compared to in-person sessions, the online platform seemed less capable of ensuring PWA’s acquisition of new language skills and maintenance of communication.

 

Conclusion

Unlike in the western clinical world, teletherapy did not gain popularity in Hong Kong until very recently because access to aphasia services has typically not been affected by distance. It is anticipated that the final findings of this study will allow us to better understand any mismatch between the actual aphasia services provided by speech therapists and the lived experience (and expectations) of service receivers in Hong Kong as they were navigating the COVID-19 pandemic. Subsequently, useful insights can be derived for the further implementation and enhancement of ST services.

Predictors of Therapy Outcome after Intensive Treatment in Post-acute and Chronic Aphasia
PRESENTER: Dorothea Peitz

ABSTRACT. Introduction

Intensive speech and language therapy (SLT) seems to be effective (Breitenstein et al., 2017; Brady et al., 2016). However, a better understanding of who benefits most is necessary to make efficient decisions about restricted therapeutic resources (Persad, Wozniak & Kostopoulos, 2013). A recent meta-analysis has shown the greatest improvements for younger age and earlier treatment enrollment (Ali et al., 2021). Some potentially important factors, e.g. handedness and education, were excluded from the meta-analysis due to heterogeneous data. We present preliminary statistical analyses based on a more homogeneous dataset including these variables.

 

Methods

This retrospective study investigates potential predictors of the outcome of intensive SLT at RWTH Aachen University hospital. The Aachen Aphasia Test (AAT) was used as the primary outcome measure at the end of each treatment cycle of 6-8 weeks. Between 2003 and 2020, data from the first treatment cycle of inpatients with aphasia in the post-acute (>6 weeks post onset, n=273) or chronic phase (>12 months post onset, n=522) were included. Each patient was classified as a responder to treatment if at least one of the AAT subscales, subtests or the profile level showed significant improvement between the latest pre-treatment and the outcome assessment (Poeck, Huber & Willmes, 1989). In a first step, we investigated the influence of Age, Sex, Handedness, Education, Etiology, Lesioned Hemisphere, Time-post-onset, Aphasia syndrome, Aphasia severity, and Number of SLT sessions (45-60 minutes per session) with univariable logistic regression analyses. Potential predictors were then entered into a multivariable model.
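
A minimal sketch of the two-step screening described above, using statsmodels formulas (variable names hypothetical; categorical coding and model-building details are simplified, and this is not the authors’ code):

# Sketch: univariable screening followed by a multivariable logistic model of
# treatment response (responder = 0/1).
import statsmodels.formula.api as smf

candidates = ["age", "sex", "handedness", "education", "etiology",
              "time_post_onset", "syndrome", "severity", "n_slt_sessions"]

# Step 1: one univariable logistic regression per candidate predictor
univariable_p = {v: smf.logit(f"responder ~ {v}", data=df).fit(disp=0).llr_pvalue
                 for v in candidates}

# Step 2: enter the potential predictors jointly into one multivariable model
multivariable = smf.logit(
    "responder ~ age + time_post_onset + severity + syndrome + n_slt_sessions",
    data=df).fit(disp=0)
print(multivariable.summary())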

 

Results

Of the 795 included patients, 56.1% were classified as responders. Univariable binary logistic regression analyses identified Age (p<.001), Number of SLT sessions (p<.001), Aphasia severity (p=.003), Time-post-onset (p=.044) and Aphasia syndrome (p=.086) as potential predictors of therapy outcome. The multivariable logistic regression analysis with these predictor variables was significant (p<.001) and explained 6.7% of the variance. The only significant variables in this model were Age and Time-post-onset (both negatively associated with good response), and Number of SLT sessions (positively associated with good response).

 

Conclusions

This study confirms and extends the findings of Ali et al. (2021) with a more homogeneous single-center dataset including additional variables such as Handedness, Education, and Number of SLT sessions. Our results support the assumption that younger age and earlier enrollment seem to be beneficial for good treatment response. Furthermore, our findings imply that the Number of SLT sessions seems to be an important factor for language improvement. However, these results are not consistent with Breitenstein et al. (2017), who found only baseline stroke severity to be a significant predictor of immediate improvement of verbal communication after intensive SLT. These inconsistent results and the small percentage of explained variance in the outcome variable indicate that predicting treatment responsiveness remains complicated and requires further examination. We are currently adding more data to the database and conducting further analyses.

Demographic, Health, and Neural Factors Associated with Chronic Aphasia Severity
PRESENTER: Lisa Johnson

ABSTRACT. Introduction

Lesion size and location are often reported as the most reliable factors predicting severity of language impairment in persons with post-stroke aphasia. Several studies have also found that demographic and health factors are related to aphasia severity. The extent to which these factors predict language impairment, beyond traditional cortical measures, remains unknown. Identifying and understanding the contributions of these factors to predictive models of severity constitutes critical knowledge for clinicians interested in charting the likely course of aphasia in their patients and designing effective treatment approaches in light of those predictions.

 

Methods

Utilizing neuroimaging and language testing data from 224 individuals with chronic aphasia, we conducted a lesion-symptom mapping (LSM) analysis to identify regions that predict overall aphasia severity scores. We used the residuals from a linear model relating severity to proportion damage in these critical regions as the dependent factor in three models: 1) Demographic Model; 2) Health Model; and 3) Overall Model.
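
A sketch of the residualization step, i.e. using what lesion load leaves unexplained as the outcome for the subsequent models (column names hypothetical, not the authors’ code):

# Sketch: regress severity on proportion damage to the two LSM-derived regions
# and keep the residuals as the dependent variable for the follow-up models.
import statsmodels.formula.api as smf

lesion_fit = smf.ols("severity ~ insula_damage + slf_damage", data=df).fit()
df["severity_residual"] = lesion_fit.resid

# e.g. the Demographic Model, predicting residual severity
demographic_fit = smf.ols(
    "severity_residual ~ cognitive_reserve + months_post_stroke + age",
    data=df).fit()
print(demographic_fit.summary())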

 

Results

Two regions were identified as associated with aphasia severity: the left posterior insula and the left superior longitudinal fasciculus. The Demographic Model revealed cognitive reserve and time post-stroke as significant predictors of severity (p = 0.004; p = 0.03), and the Health Model found that the extent of periventricular hyperintensities was associated with severity (p = 0.01). An interaction between presence of diabetes and exercise frequency was also found (p = 0.04), indicating that those with comorbid diabetes who exercised more had less severe aphasia than those who did not exercise. Finally, the Overall Model showed a relationship between aphasia severity and time post-stroke (p = 0.02), periventricular hyperintensities (p = 0.001), and a significant interaction between diabetes and exercise frequency post-stroke (p = 0.03).

 

Summary

Results from this study add to the growing literature suggesting demographic variables can shed light on individual differences in aphasia severity beyond lesion profile. Additionally, our results emphasize the importance of cognitive reserve and brain health in aphasia recovery.

Effect of Grid size and Grammatical category of referents on Identification of symbols in Persons with Aphasia and Neurotypical Adults
PRESENTER: Vineetha Philip

ABSTRACT. Introduction

The variables in the design of augmentative and alternative communication (AAC) interface displays have an important impact on an individual’s performance using an AAC system. Currently, research investigating the effect of different AAC system features on an individual’s ability to perform various communication tasks is very limited, especially in persons with aphasia. Hence, the current study aims to investigate the effects of two AAC interface variables on the ability of persons with aphasia to identify symbols in a grid display: grid size (the number of symbols per display) and the grammatical category of referents.

Method

The study participants included 20 persons with aphasia (10 with anomic aphasia and 10 with Broca’s aphasia) and 20 age-, gender- and education-matched neurotypical adults; both groups were native to Kerala, a south-western state in India, with Malayalam as their native language. The participants were asked to identify a total of 60 target PCS symbols belonging to different grammatical categories (i.e., nouns, verbs, adjectives and prepositions) from each of the four grid sizes (4, 8, 12 and 16). The accuracy, efficiency, and response time taken to identify symbols in each participant group were analyzed.

Results

The results showed that mean accuracy and efficiency scores declined, and response time increased, with increasing grid size in both participant groups; however, the rate of decline in persons with aphasia was much higher relative to neurotypical adults. Both participant groups also identified nouns most accurately and efficiently, and with the shortest response time, followed by verbs, adjectives, and prepositions.

Conclusion

The results of the current study are consistent with findings from previous research. The effect of grid size on symbol identification can be attributed to the increased cognitive demands imposed by the increased number of symbols per display. The effect of the grammatical category of referents can be attributed to differences in symbol iconicity or referent concreteness. Both an increase in grid size and the use of less concrete symbols require PWA to rely on perceptual and conceptual cues to identify symbols, which further taxes already impaired linguistic and cognitive systems. In line with existing literature, the current study re-emphasizes the importance of considering different design variables to minimize the operating demands of an AAC system, thus improving its use by persons with aphasia.

Hemodynamic Brain Responses during Working Memory Load Processing in Aphasia

ABSTRACT. Introduction

Functional near-infrared spectroscopy (fNIRS) is a noninvasive optical brain imaging technique used to measure hemodynamic activation within the brain in response to stimulation and workload (Mehagnoul-Schipper et al., 2002). Workload induced by different conditions results in varying hemodynamic responses (Sun et al., 2019). The hemodynamic responses are recorded as relative concentrations of oxygenated hemoglobin (HbO) and deoxygenated hemoglobin (HbR) in different regions of the brain.

Several neuroimaging technologies have been used with post-stroke individuals with aphasia (IWA) to understand their neurobehavioral performance and the neurological markers associated with language and cognitive difficulties. Very few studies have used fNIRS technology to understand brain activation and hemodynamic markers of behavior in aphasia (e.g., Sakatani et al., 1998). Therefore, the aim of this study is to assess the feasibility of an fNIRS system for measuring hemodynamic responses in IWA on cognitive tasks of varying mental workload.

 

Methods

Four IWA with a history of unilateral stroke and no other associated neurologic or psychologic disorders have participated to date. All participants completed the Western Aphasia Battery-Revised, Beck’s Anxiety Inventory, and the Geriatric Depression Scale prior to the fNIRS experimentation. A computer-delivered working memory (n-back) task varying across two load conditions, 1-back (low load) and 2-back (high load), was used for the experiment. Prefrontal cortical activity was measured using the 16-channel optode band from the fNIR 203C system (Biopac Systems, Inc.). The COBI Studio software was used to record the hemodynamic neural activity, and the fNIRSoft software was used for data processing. Prior to analyses, raw light intensity measures were filtered using a low-pass filter of 0.1 Hz, followed by ambient light removal. Relative changes in oxygen concentration during the task, in comparison to the initial rest period, were calculated using the modified Beer-Lambert law. HbO and HbR are the variables of interest and were recorded across both tasks. The concentration levels for each variable were averaged across all channels and across all four participants.
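
A rough sketch of the two preprocessing steps named above: low-pass filtering, then conversion to concentration changes with the modified Beer-Lambert law solved per channel for two wavelengths (the extinction-coefficient matrix, pathlength and DPF values are placeholders, not those used by fNIRSoft):

# Sketch: 0.1 Hz low-pass filtering and modified Beer-Lambert conversion.
import numpy as np
from scipy.signal import butter, filtfilt

def lowpass(signal, fs, cutoff=0.1, order=4):
    b, a = butter(order, cutoff / (fs / 2.0), btype="low")
    return filtfilt(b, a, signal, axis=0)

def mbll(intensity, baseline, ext_coeffs, pathlength, dpf):
    """intensity: (n_samples, 2) filtered light at two wavelengths;
    baseline: (2,) mean intensity during rest;
    ext_coeffs: 2x2 matrix [[eps_HbO(l1), eps_HbR(l1)], [eps_HbO(l2), eps_HbR(l2)]];
    pathlength, dpf: source-detector distance and differential pathlength factor (scalars here)."""
    delta_od = -np.log10(intensity / baseline)                 # change in optical density
    # Solve delta_od = (ext_coeffs @ [dHbO, dHbR]) * pathlength * dpf at each sample
    conc = np.linalg.solve(ext_coeffs * pathlength * dpf, delta_od.T).T
    return conc[:, 0], conc[:, 1]                              # dHbO, dHbR per sample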

 

Results

Hemodynamic responses due to the workload induced by the n-back tasks were discriminated from the resting state. The HbO and HbR activation patterns for 1-back and 2-back are presented in Figure 1. Several interesting findings were noted: 1) differential patterns of oxygen consumption in the brain were observed between the 1- and 2-back tasks; 2) oxygenated hemoglobin was higher for the 2-back (high-load) condition than for the 1-back (low-load) condition (HbO(high) > HbO(low)), i.e., the mental workload induced by the 2-back condition was greater; 3) deoxygenated hemoglobin concentration was higher for the 1-back (low-load) condition than for the 2-back (high-load) condition (HbR(high) < HbR(low)). Three IWA (75%) showed elevated HbO in a unilateral hemisphere in conjunction with a slight increase in HbR on the low-load task (Kim et al., 2018). Subjective ratings of workload were higher for the 2-back condition.

 

Conclusions

Overall, our data support the suitability of fNIRS to detect mental workload in IWA. With the inclusion of more participant data, robust information can be obtained regarding the potential of fNIRS as a brain computer interface for IWA.

Investigating factors of aphasia recovery

ABSTRACT. Introduction

Even though most studies support the notion that there is a degree of language improvement with advancing time even in untreated aphasic patients (Lazar & Antoniello, 2008),  the specifics of the association between recovery of particular language indices and possible predicting factors, such as demographic or lesion variables, are yet to be fully elucidated.  In the present study we attempt to investigate such relationships focusing on three language domains: speech output, comprehension and naming.

 

Methods

Forty-one patients (13 women), with acquired aphasia due to a single left-hemisphere stroke, aged 23-84 years (mean: 56.8; SD: 14.79), with 6-18 years of formal schooling (mean: 11.43; SD: 3.63), were recruited. All participants were right-handed native Greek speakers. CT and/or MRI scans were obtained for each patient, and lesion sites were identified and coded by two independent neuroradiologists for 16 predetermined left-hemisphere areas (Kasselimis et al., 2017). The total number of lesioned areas served as an index of lesion extent (lesion score). The patients were assessed at two testing times by a neuropsychologist. Mean time post onset was 18.68 days for the first examination and 305.53 days for the second. To assess language deficits, we used the Boston Diagnostic Aphasia Examination – Short Form (BDAE-SF) and the Boston Naming Test (BNT). Articulation rate and speech rate were calculated for each patient based on a speech sample elicited with the Cookie Theft picture of the BDAE-SF.
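
Definitions vary across studies, but speech rate is commonly computed as syllables per second over the whole sample (pauses included) and articulation rate as syllables per second over speaking time only; a trivial sketch of that convention (not necessarily the exact formulas used here):

# Common convention for the two rate measures.
def speech_rate(n_syllables, total_duration_s):
    return n_syllables / total_duration_s                        # pauses included

def articulation_rate(n_syllables, total_duration_s, pause_duration_s):
    return n_syllables / (total_duration_s - pause_duration_s)   # pauses excluded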

 

Results

The BNT score and specific BDAE-SF subscales, that is, the three auditory comprehension subscales (words, commands, complex material), as well as speech and articulation rate, were included in the analyses. Comparison between the two testing times using paired-samples t-tests showed that mean performance was significantly higher for all measures at the second examination. Regression models revealed that score differences between testing times were not generally affected by lesion extent, age at onset, or years of formal schooling, with one exception: age was a marginally significant predictor of improvement in comprehension of commands (with an inverse association with performance). We then selected four specific lesion loci (inferior frontal, superior and middle temporal gyri, as well as the inferior parietal lobule) and investigated whether lesions in these areas affected performance, separately for the first and second examination phases, by implementing mixed-effects models. Interestingly, there was a significant effect of frontal, temporal and parietal lesions on performance in the acute/subacute, but not in the chronic, phase.

 

Conclusions

Although overall improvement in several language indices was evident in our patients, our findings do not provide a clear-cut answer with regard to the demographic or lesion factors that may have contributed to the recovery of such language functions. Notably, particular lesion loci seemed to affect performance in the acute/subacute but not in the chronic phase. In conclusion, we argue that lesions affecting specific cortices of the perisylvian language network may be of greater importance in the early stages of aphasia than in the chronic phase.

Investigating dosage frequency effects on treatment outcomes following self-managed therapy via a digital health platform
PRESENTER: Claire Cordella

ABSTRACT. Introduction

Speech-language therapy is known to improve outcomes in post-stroke aphasia, particularly when it is high intensity (Brady et al., 2016). However, intensity is itself a multifactorial treatment parameter that is determined by several factors, including dosage amount, dosage frequency, session duration, and total intervention duration (Baker, 2012; Cherney, 2012; Warren et al., 2007). Understanding the effects of these sub-parameters on therapy outcomes is a critical first step towards optimizing treatment delivery for individuals with aphasia. The aim of the current study was to examine the association between one such intensity sub-parameter – dosage frequency – and change in performance on remotely delivered tasks during patients’ first 10 weeks of self-managed therapy.

 

Methods

Anonymized data from stroke survivors who used the Constant Therapy application between late 2016 and 2019 were shared with Boston University. All patients consented to the use of their data for research purposes. In the current study, we included only users who engaged with the app for at least one day in 10 of their first 15 calendar weeks of use, resulting in a study sample of 2,252 patients.

The current study includes therapy data for tasks spanning 13 skill domains: (1) auditory comprehension, (2) phonological processing, (3) production, (4) reading, (5) writing, (6) naming, (7) attention, (8) auditory memory, (9) visual memory, (10) analytical, (11) arithmetic, (12) quantitative, and (13) visuospatial. For each patient, the following variables were extracted: age, time since stroke, sex, baseline severity, and dosage frequency. Dosage frequency was defined as median days/week of app usage over the 10-week therapy period, binned into categories of 1, 2, 3, 4, or 5+ days/week. The outcome variable of interest was domain score, a composite performance measure that takes into account overall performance accuracy and task difficulty level.

Data were analyzed using linear mixed-effects models. Domain score was the dependent variable, with fixed effects of time (week number), age, time since stroke, sex, baseline severity, dosage frequency, and a time*dosage frequency interaction. Individual users and domain were modeled as random effects.
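
Crossed random effects of user and domain are most naturally written in lme4/brms-style syntax; a simplified Python sketch with only a per-user random intercept is shown below (column names hypothetical; this is not the authors’ model code):

# Sketch: mixed model of weekly domain scores with a time-by-dosage-frequency interaction.
# Simplification: only a random intercept per user; the crossed random effect of
# skill domain would require variance components or an lme4-style specification.
import statsmodels.formula.api as smf

mixed_fit = smf.mixedlm(
    "domain_score ~ week * dosage_frequency + age + time_since_stroke"
    " + sex + baseline_severity",
    data=therapy_df,
    groups=therapy_df["user_id"],
).fit()
print(mixed_fit.summary())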

 

Results

Figure 1 shows performance trends over time across all skill domains, separated by dosage frequency group. Model results revealed significant main effects of time, time since stroke, baseline severity, and dosage frequency on domain score over the 10-week treatment period. Crucially for our question of interest, the time*dosage frequency interaction was also significant, with greater change over time for higher versus lower dosage groups. Post-hoc comparisons revealed significantly greater performance change for users who practiced 4 or 5+ days/week compared with users who practiced 1, 2 or 3 days/week (Table 1). Greater improvement for higher versus lower dosage frequency groups held not only across all domains, but also within a majority (i.e., 9 of 13) of individual subdomains.

 

Conclusions

Study results demonstrate that increased dosage frequency is associated with greater therapy gains over a 10-week treatment period of self-managed teletherapy. This result provides preliminary evidence to help guide clinicians in their recommendations to patients regarding optimal practice frequency for self-managed teletherapies.

The experiences and preferences of speech and language therapists regarding aphasia therapy apps
PRESENTER: Pauline Cuperus

ABSTRACT. Introduction

People with aphasia (PWA) benefit from speech and language therapy that is administered frequently and preferably over a long period of time (Brady et al., 2016). In reality, this is often difficult to achieve for reasons including therapist availability, financial load, and physical impairments. Using aphasia therapy apps could be a means of meeting clinical recommendations related to dose and frequency (Brady et al., 2016). We currently know little about speech and language therapists’ (SLTs) experiences and perceptions of using therapy apps. This information is, however, essential in order to design products that meet the users’ needs (Bannon, 1986; Norman & Draper, 1986; Swales et al., 2016) and that are therefore more likely to be used in clinical practice. The current study therefore aimed to answer three main research questions:

1.  What are SLTs’ current experiences with regards to aphasia therapy apps?

2. What are SLTs’ perceptions of PWA’s smartphone/tablet use and the suitability of online, independent therapy for this target group?

3.  What do SLTs perceive to be facilitators and barriers to the use of aphasia therapy apps?

 

Method

Participants were recruited from Australia and the Netherlands. All respondents self-identified as SLTs and/or clinical linguists. The survey contained 4 open and 12 multiple-choice questions pertaining to our research questions and was presented in Qualtrics (Qualtrics, Provo, UT).

 

Results

Our survey respondents consisted of 29 Australian (mean age=35.5 years, 28 female) and 35 Dutch SLTs (mean age=36.2 years, 32 female). The open questions resulted in extensive feedback regarding current experiences with therapy apps and SLTs’ opinions regarding future therapy apps. The most frequently cited facilitators for increasing the use of aphasia therapy apps were user-friendliness, targeting different language modalities and using apps as an addition to regular therapy. The most frequently reported barriers were the costs, the client potentially not owning a tablet and the client’s computer (il)literacy.

 

Conclusion

To summarise, surveyed SLTs were very positive towards aphasia therapy apps. Encouragingly, they reported frequent smartphone/tablet use even in their relatively elderly caseloads and were confident in their clients’ abilities to use aphasia therapy apps independently at home. We therefore conclude that there is plenty of support in the SLT community for increasing the use of aphasia therapy apps, and this could be a means of meeting clinical recommendations regarding intensity and dose of treatment (Brady et al., 2016).

Nevertheless, our respondents also quite clearly indicated some barriers that they had experienced regarding the use of therapy apps. While it is not within researchers’ power to tackle all of these, the onus is on aphasia researchers and app developers to listen and respond to SLTs’ experiences and feedback and to improve the design of their digital therapies accordingly. In line with Swales et al. (2016), the extensive feedback that we have received clearly underlines the importance of directly involving clinicians in the aphasia app development process.

Let’s zoom in on the teleassessment of speech intelligibility
PRESENTER: Gregoire Python

ABSTRACT. Introduction

In speech and language rehabilitation, it is crucial for patients to recover intelligible speech. Intelligibility can be successfully assessed in person by computer (Haley et al., 2011), but it is also of interest to assess intelligibility online, as phone/video calls and voice messages are used daily. The aims here were to investigate 1) whether it is feasible to teleassess speech intelligibility and 2) to what extent remote recordings via Zoom are comparable to in-person recordings for scoring intelligibility.

 

Methods

Fifteen healthy speakers (25-83 y.o.) without neurological or psychiatric disorders and one aphasic individual (45 y.o.) with post-acute transcortical motor aphasia and mild apraxia of speech took part in this experiment. Speech intelligibility was evaluated with a recent computer-based assessment tool, the MonPaGe screening protocol (Laganaro et al., 2021). In the game-like intelligibility task, participants had to produce pre-defined sentences containing random target-words appearing on a colored grid, in order to give the experimenter instructions about the target-words and their location. The intelligibility score (max. 15) represented the number of target-words correctly understood by the listener.

 

All participants underwent teleassessment with three simultaneous sources of recordings:

1) “local high-quality (HQ)”: speech was recorded in person on a PC laptop with a professional microphone and an external USB soundcard;

2) “local standard-quality (SQ)”: speech was recorded in person on an Apple laptop with its internal microphone, and the WAVE sound files were automatically transferred to an online server;

3) “remote”: speech was recorded by the remotely located experimenter on their Apple laptop running MonPaGe, with its internal speakers and microphone, via an education account of Zoom.

 

Offline intelligibility scoring was performed by three speech and language therapists on the recorded material, in order to evaluate interrater agreement as well as within-rater variability across the three recording sources.

 

Results

For healthy speakers, maximal intelligibility scores (15/15 words correctly understood) were given to 58% of participants in remote recordings (min. 12/15, mean 14.4), 76% in local SQ recordings (min. 13/15, mean 14.7) and 82% in local HQ recordings (min. 14/15, mean 14.8). There was a significant main effect of the recording source (χ²=15.59, p<.001). More precisely, remote recordings led to significantly lower intelligibility scores as compared to local SQ recordings (χ²=22.5, p=.007) and local HQ recordings (χ²=129, p<.001), but both local recordings led to similar scoring (χ²=16.5, p=.22). Overall interrater agreement on intelligibility scoring was substantial (65.9% agreement; κ=.63).

 

Similarly, lower mean intelligibility scores were given to the aphasic participant in remote recordings (14.3) than in local SQ (14.7) and local HQ (14.7) recordings. Overall interrater agreement was again substantial (77.8% agreement; κ=.76).

 

Conclusions

Even though teleassessment of speech intelligibility seems feasible, intelligibility scores significantly decreased in remote recordings as compared to local recordings. Intelligibility scoring seems more influenced by online speech compression than by subjective perception, given the substantial interrater agreement. It is necessary to assess speech intelligibility not only in the office but also online, as speech-impaired individuals might suffer from a decrease in intelligibility in virtual communication to the same extent as, or even more than, healthy speakers.

Polyglot aphasia secondary to Left Fronto-Parietal Tumor: A case study on Tele-rehabilitation

ABSTRACT. Introduction

Aphasia manifests after damage to the language areas of the dominant hemisphere. Damage due to a stroke and damage due to a tumor may differ in various respects. Most often, the severity of aphasia in tumor excision cases is mild during the acute phase, and these cases therefore need more of a family-centered approach for the betterment of life (Davie et al., 2009). A detailed evaluation sheds light on the communication difficulties in tumor cases. Various comprehensive test batteries reveal that a significant decrease in communication abilities can be observed after tumor excision (Brownsett et al., 2019).

 

Methods

The case was NS, a 37-year-old female with non-fluent aphasia. The aphasia was secondary to a left fronto-parietal intradiploic tumor, and she underwent left parietal craniotomy with excision of the bone tumor and cranioplasty. An associated right hemiplegia was reported. NS is a polyglot with pre-morbid exposure to Kannada, Hindi and English. She attended 40 tele-sessions over a span of four months (3 sessions/week). The sessions consisted of systematic intervention through the virtual mode as part of tele-rehabilitation during COVID-19. A shape-coding approach along with semantic feature analysis was used. Oro-motor exercises were demonstrated to address her oro-motor weakness and her effortful dysarthric speech. The Eight-step Continuum therapy technique (Rosenbek et al., 1973) for apraxia of speech was practiced along with the shape coding. A patient-centered therapy plan was followed to improve daily communication skills and, thereby, quality of life.

 

Results

Pre- and post-therapy measurements in this longitudinal case study showed improvement in fluency, mean length of utterance, number of verb tokens, percent of verb types and percent of thematically complete sentences. A significant decrease in the frequency of phonemic paraphasias and groping behavior was also observed. Overall, everyday communication skills, in line with the needs of the individual and the environment, improved remarkably. Recovery was observed to be better for Kannada, followed by Hindi.

 

Conclusions

Aphasia rehabilitation after fronto-parietal tumor excision demands an early-intervention approach. Consistent speech-language therapy along with physiotherapy has contributed to the pronounced improvement in the person's quality of life. Moreover, the tele-mode holds promise for the provision of consistent rehabilitation services.

TelePriming Sentence Production in Aphasia: A Feasibility Study
PRESENTER: Austin Keen

ABSTRACT. Introduction

There is considerable evidence that structural priming—the tendency to repeat a recently encountered sentence structure—reflects processes of implicit syntactic learning (Chang et al., 2000; 2006). In particular, structural priming becomes stronger between interlocutors in a dialogue setting, due to increased social attention and joint activities between listening and speaking (Pickering & Garrod, 2004). Structural priming effects also become larger when lexical information is shared between a prime and a target, i.e., lexical boost (Branigan et al., 2000). Rapidly growing evidence suggests that structural priming can implicitly facilitate sentence production in persons with aphasia (PWA), supporting its potential as a clinical tool for aphasia rehabilitation (Cho-Reyes et al., 2016; Lee & Man, 2017). Specifically, in dialogue-like tasks, PWA demonstrate improved production of complex sentences, such as passives and datives (Man et al., 2019; Man et al., 2021). 

Recently, more focus has been dedicated to improving accessibility of therapy for PWA using telepractice, which has been shown to be as effective as in-person therapy in PWA (Hall, Boisvert, & Steele, 2013). The present study investigated the feasibility of applying tele-testing to structural priming (TelePriming) with PWA when in-person testing is not possible (e.g., during the pandemic). Specifically, we asked whether a dialogue-based priming task can be as effective when delivered remotely using videoconferencing as it has been in traditional in-person sessions.

 

Methods

Ten PWA, 12 older adults (OA), and 12 younger adults (YA) participated in a dialogue-priming task, wherein participants took turns with the experimenter describing transitive pictures via videoconferencing. We measured whether participants produced more passive sentences after hearing the experimenter produce passive sentences (primes) compared to active sentences. Additionally, the same verb was repeated for half of the prime-target pairs to assess the lexical boost effect. Logistic mixed-effects models were used, with the significance level set at .05.
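As a rough illustration of this kind of analysis, the trial-level model could be specified as follows; the column names (response_passive, prime_passive, verb_repeated, subject, item), the input file, and the use of statsmodels' variational-Bayes binomial mixed GLM are assumptions for the sketch, not the authors' actual analysis code.

import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Hypothetical trial-level data: one row per target picture description.
# response_passive: 1 if the participant produced a passive, 0 if active.
# prime_passive:    1 if the preceding prime sentence was passive, 0 if active.
# verb_repeated:    1 if prime and target shared the verb, 0 otherwise.
df = pd.read_csv("telepriming_trials.csv")  # hypothetical file

# Crossed random intercepts for participants and items, approximated with
# a variational-Bayes binomial mixed GLM (a crossed logistic mixed model).
vc = {"subject": "0 + C(subject)", "item": "0 + C(item)"}
model = BinomialBayesMixedGLM.from_formula(
    "response_passive ~ prime_passive * verb_repeated", vc, df)
result = model.fit_vb()
print(result.summary())

In such a sketch, a positive prime_passive coefficient would correspond to the structural priming effect and a positive interaction term to the lexical boost.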

 

Results

All three groups showed significant priming effects, as indicated by increased production of passive sentences after hearing the experimenter produce passive versus active prime sentences. In addition, the priming effects were greater when the verb was repeated between prime and target sentences in all three groups, although this lexical boost effect did not reach statistical significance in PWA. All three groups showed medium to large effect sizes of priming effects using Cohen’s d (Cohen, 1992), with greater magnitude of priming for the same verb versus different verb prime condition (same vs. different verb priming for YA: d’s = 1.7 and 1.3; OA: d’s = 5.6 and 3.2; PWA: d’s = 2 and 0.7).

 

Conclusions

The results are consistent with previous findings in which PWA and healthy adults showed significant structural priming and lexical boost effects in a dialogue-like task (Man et al., 2019; Man et al., 2021). This study also suggests that structural priming is effective in PWA when delivered remotely using web-based videoconferencing. Therefore, implicit syntactic learning in a dialogue context remains preserved in PWA, and TelePriming provides a valid alternative to in-person testing.

Navigating the intricate world of aphasia apps: A guide for individuals with aphasia and their families
PRESENTER: Anjelica Vance

ABSTRACT. People with aphasia (PWA) need speech-language therapy to enhance their recovery. It has been shown that the first months post-stroke are critical for developing a treatment plan that will maximize language improvement (Bhogal et al., 2003). Also, current evidence strongly suggests that continued therapy promotes further recovery even in the chronic stages, with more intensive therapy being more beneficial (Breitenstein et al., 2017; Fleming et al., 2021). Given the reduced availability of in-person speech therapy sessions and limited insurance coverage for ongoing speech-language therapy, there is a continued need for digital tools that can supplement in-person therapy.

 

Computer-based therapy has numerous advantages. It is accessible around the clock and can be tailored to the PWA’s specific needs in content, duration, and intensity. Furthermore, a recent randomized clinical trial showed that computerized speech language therapy, when used in tandem with traditional speech therapy, yielded significantly better results in some aspects of language recovery (Palmer et al., 2019). Thus, computerized aphasia treatments can enhance outcomes. However, finding the tools that best address the person’s unique deficit can be difficult for people with aphasia, their caregivers, and even their clinicians, particularly during the pandemic when in-person interactions are limited.

 

In this project, we systematically reviewed desktop and mobile applications for both tablets and computers that were listed as speech therapy aids, and then designed a format in which they could be easily accessed. During this process, we consulted with PWA and licensed speech language pathologists who shared their experiences. The final applications list was determined based on four criteria: ease of use, quality of instruction, quantity of information, and efficacy of the implemented therapy. We organized the list in a visual array such that spatial positioning, colors, and font size could be easily viewed by PWA. The categories were given self-explanatory names with clearly marked hyperlinks to aid access to each application. The final design resulted in a circular web of applications stemming from a central language disorder, becoming more specific as the diagram branched out. Colors were kept in the same family depending on the target deficit (auditory comprehension, reading/writing, apraxia, etc.) and changes in hue distinguished the specific applications from the general category. We also provided more detailed information about each application and its evidence-base in a table format accessible to the PWA and their caregivers.

 

Our experience observing PWA struggle to continue regular and intensive therapy over the course of the pandemic highlighted the difficulties in finding online resources specific to the individual’s deficits. With continued technological advances, the addition and integration of app-based speech therapy with traditional approaches is inevitable and could potentially usher in greater individualization within neuroscience-based speech therapy (Lambon Ralph, 2021). However, these novel opportunities must be accessible to those who directly benefit from these services. The hope is that the tool described here can aid in the individualization of aphasia therapy and facilitate access to these online tools for PWA, promoting uninterrupted intensive therapy and continued recovery.

An examination of retrieval practice and production training in the treatment of word-comprehension deficits in aphasia.

ABSTRACT. Introduction

Word comprehension deficits in aphasia can complicate many linguistic processes, and are difficult to treat. Recent studies suggest that practice retrieving names from long-term memory (retrieval practice) more durably strengthens future naming ability in people with aphasia compared to errorless learning (i.e., word repetition), which eschews retrieval practice [1,2,3]. The current study examined the effects of a receptive form of retrieval practice and a non-retrieval comparison treatment (restudy) on word comprehension deficits in aphasia. We also examined whether errorful comprehension items that receive naming treatment (retrieval practice versus word repetition) show improvements on a later comprehension test, a form of generalization termed task transfer.

Methods

Twelve people with chronic stroke aphasia (PWA) with a word comprehension deficit completed the study (see Table 1). The stimuli consisted of 408 picture pairs, each comprising one target image of a common object (e.g., backpack) and one semantically-related foil image (e.g., lunchbox). Errorful pairs were selected for each PWA in an item selection phase for matched assignment into the conditions. A designation of correct, both during item selection and during the comprehension tests following treatment, required both accepting the target picture (backpack) for the target word (“backpack”) and rejecting the foil picture (lunchbox) for the target word (“backpack”) on nonconsecutive trials [4].

All 12 participants completed a comprehension training module; 8 of them had sufficient errorful pairs to populate the full design and therefore also completed a naming training module. A single training module involved one training session followed by both a 1-day and a 1-week comprehension test on those items. In the comprehension module, for comprehension retrieval practice, the participant chose between the target and foil image given the target word; for restudy, the target image was highlighted at target word onset. In the naming module, for production retrieval practice, the participant attempted to name the target image; for word repetition, the target image and word were presented and the word was orally repeated by the participant. All trials ended in correct-answer feedback. Matched sets of untreated items were probed at the comprehension tests following each module.

Results

Mixed logistic regression applied to the group of 12 participants revealed robust treatment benefit from both types of comprehension training relative to untreated items (all p’s<.01) at both test timepoints with no difference between comprehension retrieval practice and restudy. In the naming module, a robust treatment benefit was observed after production retrieval practice at the 1-week test, and after word repetition at both timepoints (all p’s<.05) relative to untreated items. An analysis of retention of gains from the 1-day to 1-week test revealed better retention of accuracy in the production retrieval practice versus word repetition condition (p < .05).

Conclusion

The two forms of comprehension-based training were equally efficacious, and significant task transfer was observed from production training to comprehension performance. Production retrieval practice conferred more durable learning, compared to word repetition, similar to studies on naming treatment in aphasia [1,2,3]. Implications for aphasia treatment and models of word comprehension are discussed.

Response Generalization in Anomia Treatment: A Focus on Untrained Stimuli Selection
PRESENTER: Audrey Wayment

ABSTRACT. Introduction

Anomia is ubiquitous across persons with aphasia and remains one of the most common targets of treatment. The success of an anomia treatment can be measured by examining its ability to promote generalization, whether to untrained tasks (i.e., stimulus generalization) or untrained stimuli (i.e., response generalization; Thompson, 1989). We focus on response generalization, as there have been mixed findings regarding the amount and mechanism of generalization (Nickels & Best, 1996). For example, Semantic Feature Analysis (SFA; Boyle, 2010) has shown some evidence of response generalization, where semantically related untrained words are more likely to improve post-treatment than unrelated untrained words (e.g., training dog leads to improvement on cat, but not spoon; Quique et al., 2019). One explanation for response generalization in SFA is that treatment harnesses the structure of semantic memory as activation spreads from trained words to connected untrained words in the language network. Given this model, we hypothesize that more similar untrained words will exhibit greater generalization than less similar untrained words. We sought to examine the degree of relatedness of untrained word probes and its influence on response generalization in previous SFA treatment studies.

 

Methods

Ten articles from Quique et al.’s (2019) meta-analysis of SFA were assessed. We considered each study’s selection criteria for treatment probes and the relatedness of untrained to trained probes, and examined improvement of untrained probes after treatment. As one measure of response generalization, we calculated the Percentage of Nonoverlapping Data (PND) for naming of untrained probes by participant in each study using Tarlow & Penland’s (2016) calculator, where the number of treatment datapoints greater than the highest baseline datapoint is divided by the total number of treatment datapoints (Scruggs et al., 1987). PND provides information about the effectiveness of treatment: highly effective (> 90%), moderately effective (70-90%), questionable effect (50-70%), and ineffective (< 50%) (Scruggs et al., 1987).
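For concreteness, the PND computation described above can be sketched as follows; the data values are hypothetical.

from typing import Sequence

def percentage_nonoverlapping_data(baseline: Sequence[float],
                                   treatment: Sequence[float]) -> float:
    """PND: percentage of treatment-phase datapoints exceeding the highest
    baseline datapoint (Scruggs et al., 1987)."""
    ceiling = max(baseline)
    exceeding = sum(1 for x in treatment if x > ceiling)
    return 100.0 * exceeding / len(treatment)

# Hypothetical naming-accuracy probes (% correct) for an untrained probe set:
baseline = [10, 20, 15]
treatment = [25, 40, 10, 35, 50]
print(percentage_nonoverlapping_data(baseline, treatment))  # 80.0 -> moderately effective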

 

Results

Of the ten studies, only four had selected untrained probes based on their relatedness to the trained probes through shared features or category membership. Only one of these studies reported the stimuli used (Wallace & Kimelman, 2013), limiting our ability to systematically quantify the degree of relatedness of untrained probes. In Wallace & Kimelman (2013), untrained probes could share features with one or more of the trained probes, with at least three features in common. PND ranged from highly effective to ineffective across the four studies (Table 1), with no clear relationship between relatedness of untrained probes and PND.

 

 

Conclusions

We were unable to answer our initial question: does the degree of relatedness of untrained probes influence response generalization? Further experimental studies should test a continuum of relatedness by exerting more control over the selection of untrained probes. We emphasize the need to better understand how the relationship between trained and untrained probes affects response generalization, given predictions from language models, with the ultimate goal of enhancing the effectiveness of anomia treatment. While we focused only on SFA, our question also pertains to other anomia treatments and other types of relatedness (Castro et al., 2021).

Shared Decision Making for Persons with Aphasia: A Scoping Review

ABSTRACT. Introduction: Persons with aphasia (PWA) often retain decision making (DM) capacity, but language impairments pose barriers to participation. This can lead to their marginalization from the DM process (Stein & Brady Wagner, 2006).

Shared Decision Making (SDM) is an evidence-based approach that promotes patient involvement in the DM process within healthcare. It encourages collaboration between the patient and the healthcare professionals and the exchange of information about healthcare options, their risks and benefits, and patient and family preferences and values (Makoul & Clayman, 2006).

SDM approaches could aid in overcoming the healthcare barriers faced by PWA; however, little is known about SDM for PWA.

The purpose of this scoping review was to review and synthesize available evidence on SDM approaches and interventions for PWA.

Methods: We performed a scoping review following the six stages identified by Arksey and O’Malley (2005), enhanced by Levac et al. (2010): 1) identifying the research questions, 2) identifying relevant studies, 3) selecting the literature, 4) charting the data, 5) collating, summarizing, and reporting results, and 6) consulting with stakeholders and developing a knowledge translation plan. The following databases were searched: MEDLINE, EMBASE, PsycINFO, AMED, CINAHL, ComDisDome, LLBA and Scopus from 1982 to June 2020. We included peer reviewed and grey literature that reported on SDM approaches for PWA making a healthcare treatment or screening decision. We provided a narrative synthesis of the findings.

Two reviewers independently extracted data using a standardized and pre-piloted data extraction form. Inconsistencies in extracted data were resolved through consensus with a third rater. We extracted citation information (e.g., authors, year of publication, country of origin), study information (e.g., study aims, methodological approaches), SDM definitions, conceptual or theoretical underpinnings, aphasia subtypes, setting(s) of care, SDM interventions and associated communication interventions, SDM-relevant outcomes and measures, as well as important findings and gaps in the research.

Results: After deduplication, the search yielded 5492 citations. Of these, the full text was screened for 73 articles. Two studies met the inclusion criteria; one from Denmark (Isaksen, 2018), and one from the US (Brady Wagner, 2018).

The decisions discussed were whether to continue or terminate speech therapy (Isaksen, 2018) and plans related to discharge (Brady Wagner, 2018). Neither study provided a clear definition of SDM or SDM interventions. The techniques and strategies used for supporting communication with PWA were: 1) Supported Conversation for Adults with Aphasia™; 2) Talking Mats; 3) other visual supports. No specific outcomes related to SDM for PWA were measured, nor was the effectiveness of SDM for PWA explored.

Conclusions: There is a dearth of evidence informing the use of SDM with PWA. This population is at risk of being inappropriately excluded from decisions about their health due to their communication impairment. There is an ethical imperative to design, develop, and empirically evaluate SDM interventions tailored to PWA to ensure this population can make high quality and informed decisions that are consistent with their values and preferences.

Humor Functions in Aphasia Group Therapy within a Modified Intensive Comprehensive Program Model
PRESENTER: Tori Scharp

ABSTRACT. Introduction

Group aphasia therapy is a form of service delivery that can promote communication confidence and solidarity (Simmons-Mackie, 2001) and is a critical component of Intensive Comprehensive Aphasia Programs (ICAPs; Rose, Cherney, & Worrall, 2013). Elman (2007) proposed humor as a critical facet of group therapy that leads to positive outcomes among participating group members. The current study identified instances of humor within student-led group therapy sessions and identified their potential functions. Study results were derived from data collected during group aphasia therapy sessions within a modified version of an ICAP in which the minimum therapeutic dose of 30 hours was delivered on a 1-week accelerated timeline.

 

Methods

This study used a retrospective, between-groups cohort design. Participants with aphasia (PWA) were divided into two groups of 4-5 participants (Groups A and B) paired with student clinicians. Topics for discussion included introductions, descriptions of favorite vacations, and student-led games. Eight group sessions (4 sessions/group; 50-75 minutes) were timestamped for instances of humor using a definition based on prior work (Sherratt & Simmons-Mackie, 2016). A constant comparative inductive method (CCM; Strauss and Corbin, 1990) was applied by two student researchers to code instances of humor and generate functions for each instance. Following CCM procedures, the six functions of humor from Sherratt and Simmons-Mackie were used to re-code the instances of humor to link these data to the current literature.

 

Results

A total of 220 instances of humor (Group A = 78; Group B = 142) occurred during group sessions. Half of the instances of humor in Group A were initiated by student clinicians, as were 56% in Group B. The CCM yielded six functions of humor: improve likeability, bolster togetherness, build rapport, preserve dignity, deflect tension, and unintended humorous instances. While there was a significant difference between the total number of humorous instances generated by Group A (M = 19.5, SD = 8.1) and Group B (M = 35.5, SD = 6.9), t(6) = -3.02, p = .02, the proportions of humor functions were similar between groups. There was a strong level of correspondence between the pattern of results from the CCM and the Sherratt and Simmons-Mackie (2016) method, which included the following functions: demonstration of solidarity, managing identity, saving face, avoiding inappropriate topics, attempts to increase likeability, and mitigating disagreements. Increasing likeability was the dominant function for all humorous instances across groups.

 

Conclusions

Improving likeability, building rapport, and bolstering togetherness were the most common humor functions used by PWA and clinicians. PWA also used humor to preserve dignity during moments of communication difficulty. The functions of humor in this study paralleled those in the prior literature. Future studies can examine the role of humor in enhancing life participation and satisfaction in social situations, as well as the factors that contribute to group dynamics and may influence how often humor occurs. Living with aphasia can impact functional communication skills and quality of life, and using humor may provide a strategy for PWA to engage with others, leading to increased feelings of inclusion and a greater sense of communication independence.

Cerebral small vessel disease burden: A biomarker for post-stroke aphasia recovery
PRESENTER: Maria Varkanitsa

ABSTRACT. Introduction

Cerebral small vessel disease (cSVD) is a disorder of microvessels that causes a range of abnormalities seen on brain imaging. cSVD is a common neuropathological process in the elderly, causing two principal, potentially devastating, outcomes in this population: stroke and vascular cognitive impairment and dementia (Wardlaw et al., 2019; Zanon Zotin et al., 2021). Despite the high prevalence of cSVD in stroke survivors, its role in post-stroke aphasia recovery has not been systematically examined. In this study we systematically assessed the clinical significance of the global burden of cSVD through a neuroimaging evaluation of white matter hyperintensities (WMH), enlarged perivascular spaces (EPVS), lacunes and global cortical atrophy (GCA) in people with aphasia (PWA) who underwent language therapy.

 

Methods

Thirty chronic PWA (10F; age: mean=61 years, range=40–80 years; education: mean=15 years, range=12–18; time post stroke: mean=52 months, range=8–170 months) with aphasia due to a single left-hemisphere stroke (lesion volume: mean=135.21 cm³, range=11.66–317.07 cm³) completed up to 12 weeks of semantic feature analysis treatment for word retrieval deficits (Gilmore et al., 2018). Mean baseline aphasia severity from the Western Aphasia Battery–Revised (WAB-R AQ) was 59.83 (range=11.7–95.2). Baseline T1- and T2–FLAIR-weighted MRI scans were rated for four major cSVD biomarkers, including WMH, EPVS, lacunes and GCA, using validated visual rating scales. Total cSVD burden was rated on an ordinal 0-4 scale by counting the presence and severity of each of the four biomarkers. To determine the role of cSVD burden in treatment-induced aphasia recovery, we used mixed-effects logistic regression with binary naming accuracy as the predicted variable. Our main predictor was the interaction between total cSVD score and session; WAB-R AQ, stroke lesion volume, months post onset and age were included in the model as covariates, and participant and item were included as random factors.
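Written out, and under the standard assumption that the main effects of cSVD and session accompany their interaction (the abstract names only the interaction as the main predictor), the model corresponds roughly to:

\[
\operatorname{logit} P(\mathrm{correct}_{ijt}=1) = \beta_0 + \beta_1(\mathrm{cSVD}_i \times \mathrm{session}_t) + \beta_2\,\mathrm{cSVD}_i + \beta_3\,\mathrm{session}_t + \beta_4\,\mathrm{AQ}_i + \beta_5\,\mathrm{LesionVolume}_i + \beta_6\,\mathrm{MonthsPostOnset}_i + \beta_7\,\mathrm{Age}_i + u_i + w_j,
\]
\[
u_i \sim N(0,\sigma^2_{\mathrm{participant}}), \qquad w_j \sim N(0,\sigma^2_{\mathrm{item}}),
\]

where i indexes participants, j indexes naming items, t indexes treatment session, and u and w are random intercepts.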

 

Results

Our participants presented with various degrees of brain changes associated with cSVD. The regression model showed a significant interaction between total cSVD burden and session (p<0.0001). Follow-up analyses showed that the predicted probability of accurate naming increased over time more for participants with less severe cSVD. This interaction was significant after controlling for aphasia severity, which was also a significant predictor (p<0.001), and for stroke-related factors, including total lesion volume and months post onset.

 

Conclusions

This work indicates that the severity of cSVD may predict how well PWA will respond to language treatment independent of demographic and stroke-related factors, including initial aphasia severity, such that patients with less severe cSVD are expected to exhibit better treatment outcomes compared to patients with more severe cSVD. This is in line with the general premise of neuroplasticity, that is, that structural integrity influences language recovery (Kiran & Thompson, 2019), and provides evidence that cSVD, an index of brain reserve (i.e., individual differences in brain structure due to chronic brain pathological changes), constitutes a clinically relevant predictor not only of post-stroke dementia (Mok et al., 2017; Wong et al., 2016) but also of post-stroke aphasia recovery (Varkanitsa et al., 2020).

A complex view of the Grapheme-to-Phoneme Conversion (GPC) procedure: Evidence for vowel developmental dyslexia from a shallow orthography language

ABSTRACT. Introduction

Recent findings suggest that the sublexical route described in the Dual-Route Model is likely to be a multi-layered and feature-based process. This study examined the presence of vowel dyslexia in Italian, a shallow-orthography language. 

Methods 

The new TILTAN-IT reading battery, aimed at assessing specific types of dyslexia, was administered to 609 Italian-speaking children (2nd-8th grade), recruited at their schools. This battery includes lists of Italian words, word-pairs, and non-words selected as sensitive stimuli for each type of dyslexia. Reading errors were coded according to their types, to identify dyslexia types. Errors of substitution, addition, omission, and migration of vowels were coded separately from the parallel errors in consonants.

Results

28 children were diagnosed with vowel dyslexia. They produced significantly more vowel errors (vowel letter omission/substitution/addition/migration) than the controls, but made no more than one consonant error. We also found the opposite pattern, which has not been reported before: 21 children made significantly more errors on consonants than their age-peers, but fewer than 2 vowel errors.

Conclusions 

TILTAN-IT allows one to detect different types of dyslexia in Italian. In particular, the results suggest that despite the highly consistent conversion of vowels from orthography to phonology, it is still possible to identify a specific impairment in reading vowel letters in the sublexical route. We were also able to identify children who showed the mirror-image dyslexia, with errors only on consonants in reading nonwords. These results indicate that the sublexical route is more complex than previously thought, with separate conversion mechanisms for vowel letters and for consonant letters.

Session 6 (permanent): Poster session

Sunday, 4.30pm-6pm: Language and Speech Production; Bilingual language processing and production; Primary Progressive Aphasia

The Role of Phonological Working Memory in Narrative Production: Evidence from Case Series and Case Study Analyses of Chronic Aphasia
PRESENTER: Rachel Zahn

ABSTRACT. Introduction 

Early work showed that semantic, but not phonological, working memory (WM) supports the ability to produce multiword utterances.1,2 Recent results from narrative production at the acute stage of stroke corroborated these findings, with semantic, but not phonological WM, predicting narrative measures of sentence elaboration.3  However, phonological WM was found to have a positive relation with speech rate (words per minute), and a negative relationship with proportion pronouns relative to nouns.3,4 Two hypotheses might explain these relationships: 1) slower, more error-prone phonological retrieval of single words leads to slower speech rate, reduced rehearsal in WM tasks, and a preponderance of pronouns (which are easy to retrieve),3 and 2) the existence of separate input and output phonological WM buffers,4,5,6 where the output buffer supports fluent speech, phonological WM, and the maintenance of longer words. Follow-up analyses supported the phonological retrieval hypothesis; however, input and output WM capacities could not be distinguished in the acute sample.5 The current study evaluates these two hypotheses for individuals with chronic aphasia, where measures of input and output phonological WM were available.

 

Method 

36 participants were tested on narrative production, semantic WM, single word processing and production, and input and output phonological WM. Output phonological WM tasks required list output (e.g., digit span) whereas input tasks required a yes/no judgment (e.g., rhyme probe). Participants’ narrative production was scored using the Quantitative Production Analysis (QPA)6.

In a case series analysis, multiple regressions were used to analyze the role of WM and single word measures in predicting words per minute and proportion pronouns. In a case study analysis, two individuals who showed a striking distinction between input and output phonological WM capacities, while matched on other variables, were evaluated for relevant aspects of narrative production.
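A minimal sketch of these case-series regressions, assuming hypothetical column names and an input file for the WM and single-word measures (the actual predictor set follows the description above):

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical participant-level data: one row per person with aphasia.
df = pd.read_csv("chronic_case_series.csv")  # hypothetical file

# Speech rate predicted from single-word phonological retrieval plus
# semantic, input-phonological, and output-phonological WM measures.
wpm_model = smf.ols(
    "words_per_minute ~ single_word_retrieval + semantic_wm"
    " + input_phon_wm + output_phon_wm", data=df).fit()
print(wpm_model.summary())

# Same predictors for the proportion of pronouns relative to nouns.
pron_model = smf.ols(
    "proportion_pronouns ~ single_word_retrieval + semantic_wm"
    " + input_phon_wm + output_phon_wm", data=df).fit()
print(pron_model.summary())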

 

Results/Discussion 

Words per minute showed a positive pairwise correlation with both input and output phonological WM measures. However, in the multiple regression analysis, only single word phonological retrieval had a significant weight in predicting words per minute, with no independent role for input or output phonological WM, supporting the phonological retrieval hypothesis. Proportion pronouns showed no correlation with output phonological WM and a positive correlation with input phonological WM, opposite to that obtained in the acute sample. However, the distribution of proportion pronouns showed extremes in both directions, with some participants producing almost no pronouns and agrammatic speech and one participant producing many pronouns and fluent anomic speech. Neither pattern was observed in the acute sample. These patterns in the chronic sample made it difficult to interpret the proportion pronoun results under either hypothesis. In the case study approach, the output WM deficit case showed the predicted effects on narrative production, with a slower speech rate, increased use of pronouns and increased phonological errors in production. The differing case series and case study results can be attributed to the difficulty of separating input and output phonological WM in the regression approach, given their substantial correlation. In contrast, the case study results reveal that the two capacities can be separated and have the predicted effects on narrative speech.

A pilot normative study for photographs of celebrities in Hong Kong
PRESENTER: Annie Fung

ABSTRACT. Introduction

Psycholinguistic normative data have facilitated research on the underlying mechanisms of lexical processing (e.g., Lam, 2009). An increasing number of studies of various norms in Hong Kong Chinese have been conducted (e.g., word frequency in Hong Kong Cantonese: Lai & Winterstein, 2020; familiarity and age of acquisition (AoA) in action picture naming: Tse, 2005). In this study, the naming of proper nouns (e.g., Bonin et al., 2012) was targeted by developing a set of norms for celebrity naming based on local native Cantonese speakers in Hong Kong. Specifically, this investigation collects a set of colored photographs of local and international celebrities and obtains ratings on various variables including AoA, facial distinctiveness, familiarity, surname frequency and emotional indicators, as well as behavioral naming data including accuracy, response time and errors, such as tip-of-the-tongue (ToT) states.

 

Method

This study, involving 48 healthy adults (40-65 years, 1:1 gender ratio, stratified into two education groups), is being conducted in three phases.

The first phase generates a list of celebrity-name exemplars, common and culturally and geographically specific, produced by unimpaired speakers (n=16) across 22 selected occupational categories. Exemplars present in at least 20% of the responses will be selected as potential target stimuli, and three photographs per corresponding celebrity will be chosen and standardized.

The second phase examines the face-name agreement of the photos chosen in Phase 1. Another group of participants (n=8) will be asked to imagine a given celebrity’s face and compare the mental image created with the photographs presented for an agreement rating. For each celebrity, the photograph with the highest accumulated score will be used in the third phase.

In Phase 3, the finalized photographs from Phase 2 will first be presented to a third group of participants (n=12), who will be required to say aloud the first name that comes to mind as quickly as possible. Response time (RT), accuracy (i.e., whether the naming matches the celebrity’s identity), erroneous responses, and reasons for ‘no response’ (e.g. ToT) will be recorded. Subsequently, the names of the chosen celebrities will be presented to a fourth group of participants (n=12) for subjective ratings of familiarity, AoA, face distinctiveness, and affective evaluation, using a seven-point scale.

 

Pilot results

Some pilot data of Phase 1 were collected from four participants (two female and two male speakers, education level not controlled). A total of 242 celebrity names across 22 selected occupational categories were generated (See Table 1). Further data collection is ongoing.

 

Conclusions

The pilot results reinforced the cultural and geographic specificity of celebrity norms, as only 2.4% of the generated exemplars overlapped with those collected from speakers of British English (Smith-Spark, 2006). We believe that this study will fill a gap in Chinese psycholinguistic norm studies. As such, it distinguishes itself from other reported normative studies in Hong Kong Cantonese, and the final deliverables should be useful to researchers who need such information, for example in designing psycholinguistic experiments in Cantonese.

Which word planning processes require attention: evidence from dual-task interference in aphasic speakers

ABSTRACT. Introduction

In everyday life, utterance production is affected under dual-task conditions (speaking while cooking or listening to the radio), and this seems to be all the more the case when language is impaired. It has been recognized that utterance planning is not entirely automatic and that some processes need attention. Dual-task paradigms have been used to test attentional requirements in word production (Ferreira & Pashler, 2002). Studies have shown that conceptualization and lexical selection are under attentional demand (Roelofs & Piai, 2011), and recently, studies carried out with healthy (Cook and Meyer, 2008; Fargier & Laganaro, 2019) and aphasic speakers (Laganaro, Bonnans, & Fargier, 2019) have shown that post-lexical processes (phonological and phonetic encoding) also require some amount of attention. More specifically, an increase in phonological errors has been reported in aphasic participants in a dual-task condition with concurrent auditory stimuli appearing at an SOA of +300 ms (Laganaro et al., 2019), whereas lexical errors were not affected by the same dual task. In the study presented here, we aim to investigate whether other word planning processes (other types of errors) are affected by a concurrent dual task when auditory stimuli are presented at different SOAs.

 

Methods

Twenty-one participants with aphasia following a left-hemisphere stroke (mean age: 59.52) took part in the study, as did a group of 12 matched control subjects (mean age: 56.17) with no history of neurological impairment.

Participants underwent a picture naming task and an auditory (syllable) detection task in isolation (single-task condition) and under dual-task conditions. Under the dual-task condition, the auditory stimuli (four different CV syllables) appeared at three SOAs (+150 ms, +300 ms or +450 ms), corresponding to the time-windows associated with lexical, phonological and phonetic encoding in Indefrey (2011). Under the dual-task condition, the participants were instructed to name the pictures as fast and accurately as possible, while pressing a key when they heard the syllable /fo/ (associated with filler pictures, discarded from the analyses).

 

Results

Under the dual-task condition, both control and aphasic participants showed interference on production latencies at each SOA relative to the single task. In the control group, there was no difference in accuracy in the dual-task condition relative to the single task. Analyses by type of error were performed for the brain-damaged participants. The rate of lexical errors (semantic paraphasias, unrelated lexical errors, verbal perseverations) was not significantly different between the single- and dual-task conditions. An increase in phonological errors (phonological paraphasias, neologisms) was found at late SOAs (+300 and +450 ms), and an increase in non-responses (omission errors) was found at SOA +150 ms.

 

Conclusions

The results confirm that the observation by Laganaro et al. (2019), that only phonological errors increased under the dual-task condition, was related to the specific SOA used in that study. The increase in omission errors and phonological errors at the SOAs associated with lexical (+150 ms) and post-lexical (+300 and +450 ms) encoding processes, respectively, confirms that attentional resources are involved in all encoding processes.

Developmental Proper Name Anomia
PRESENTER: Yaara Petter

ABSTRACT. Introduction

The ability to accurately and efficiently retrieve proper names is of great importance in human communication. There is much evidence supporting the claim that proper and common names are retrieved via distinct processes, including compelling cases of acquired proper-name anomia without common-name anomia (e.g., Cohen et al., 1994; Otsuka et al., 2005; Semenza & Zettin, 1988, 1989). This study describes the first in-depth cases of developmental proper-name anomia, examines in detail the nature of the impairment and its locus in the name retrieval process, and sheds light on specific aspects of the proper-name retrieval process.

 

Methods

The participants were ten individuals aged 30-49, who reported considerable difficulties in retrieving people’s names since childhood. Proper-name retrieval was assessed using a person-picture naming task (155 items) and naming-to-definition tasks (46 items), both adapted to the age-group of the participant (as the persons whose faces and names are familiar to 30-year-olds are not the same as those known to 49-year-olds). Faces that the participant reported as unfamiliar were removed from further analysis; retrieval failure was only coded when the participant knew the person but could not retrieve their name. Common-name retrieval was assessed from pictures (193 items, Biran & Friedmann, 2005) and from definitions (33 items). Performance was compared to age-matched controls (N=39) using Crawford & Howell’s (1998) t-test and a dissociation analysis (Crawford & Garthwaite, 2005).
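The Crawford & Howell (1998) comparison of a single case against a small control sample has a simple closed form; a sketch with made-up scores follows (the function name and example values are illustrative only):

from math import sqrt
from statistics import mean, stdev
from scipy.stats import t as t_dist

def crawford_howell(case_score: float, control_scores: list) -> tuple:
    """Crawford & Howell (1998) modified t-test for a single case versus
    a small control sample; returns (t, one-tailed p for a deficit)."""
    n = len(control_scores)
    m, sd = mean(control_scores), stdev(control_scores)  # sample SD (n - 1)
    t_val = (case_score - m) / (sd * sqrt((n + 1) / n))
    return t_val, t_dist.cdf(t_val, df=n - 1)

# Hypothetical example: the case names 60% of person-pictures, controls ~90%.
print(crawford_howell(0.60, [0.92, 0.88, 0.95, 0.90, 0.85, 0.93]))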

 

Results

Six participants showed selective developmental proper-name anomia. Their proper-name retrieval was significantly below that of the control group (p<.05), and their common-name retrieval was within the control range. They showed a classical dissociation between these tasks (p<.05) and were further tested for the extent and functional locus of their deficit, retrieval of semantic information from different modalities, and more. The difficulty was largely consistent across input and output modalities and in verbal fluency tasks.

In the fluency tasks (retrieval within one minute), all participants performed below the controls in retrieving celebrity names by profession but were at or above the control average for common names (animals and vehicles), with a significant group-by-task interaction (p<.001). They performed similarly to controls in the retrieval of names by group (e.g., Italian names) and as poorly as controls when retrieving names by their meaning (e.g., people’s names that are also flower names, such as Daisy).

Many Hebrew names are also common names (“Gal Gadot” means “wave, river-banks”), which allowed us to compare the participants’ retrieval of the same phonological sequence as a proper name and as a common name. They were significantly better at retrieving the same phonological sequence as a noun than as a name.

 

All six participants were able to access specific semantic knowledge about a person while being unable to access their name. Reading of irregularly-written names showed a pseudohomophone-name effect, indicating an ability to access the phonological lexicon containing people’s names. Together, these findings point to a name-specific deficit in the access from person-specific semantics to the name in the phonological lexicon.

 

Conclusions

The results highlight a scarcely-reported developmental difficulty in retrieving people’s names, with intact common-name retrieval. The data further show a dissociation between the retrieval of the exact same phonological sequence as a name and as a common noun.

How do people with aphasia describe their word-finding difficulties? Metaphor analysis of written accounts.
PRESENTER: Bethan Tichborne

ABSTRACT. Introduction

Aphasia is a heterogeneous disorder: variation in language deficits, severity, and cognitive comorbidities all contribute to a complex picture. We should then expect that subjective experiences of aphasia are equally diverse. There is little exploration of this, perhaps because of the methodological difficulties of investigating a hard-to-describe experience in a population with communication impairment. One way to communicate complex experiences is through metaphor. Littlemore (2019) shows that metaphor analysis can provide valuable, clinically useful insights about disorders which impact on cognition and language use, such as schizophrenia and autism. Several studies have examined the experience of aphasia rehabilitation through metaphor analysis of interview transcripts (e.g. Ferguson et al, 2010). There are no studies addressing the experience of language impairment itself. This study aims to discover how people with aphasia use metaphor to describe word-finding difficulties in retrospective written accounts, and to consider the implications for clinical practice.

Methods

Thirteen accounts of aphasia were selected, representing a wide range of aetiologies, social histories and impairments. All descriptions of word-finding and word use were identified, and metaphors were identified and coded in these selections using Cameron and Maslen’s (2010) adapted version of the Pragglejaz (2007) procedure. Key passages were identified in each account for further analysis; the metaphors in these selections were coded into vehicle groups, and patterns of metaphor usage were explored within and across the texts, following a discourse-based approach (Semino, 2008).

Results

8146 metaphorical expressions referring to language use or impairment were identified in total, 4056 of them concerning expressive language (in writing, speech and thought). A number of source domains were used across all or most texts and constituted a majority of the instances of metaphorical language. The most common source domains (across all modalities) were PHYSICAL OBJECTS (1175), PERSONIFICATION (737), JOURNEY/LANDSCAPE (543) and CONTAINER (496). The use of these source domains to describe word-finding difficulties showed a basis in the frameworks common in descriptions of unimpaired language (Semino, 2008), with extension or elaboration to communicate salient aspects of the aphasic experience.

Conclusions

The findings demonstrate that the metaphors used by people with aphasia to describe word-finding difficulties include metaphors non-aphasic people use to talk about words and word-finding, as well as more novel expressions. Variation across the texts suggests that the experience and conceptualisation of aphasia may be influenced by social history and/or aphasia type. Metaphors used by an individual about early recovery can provide a conceptual framework within which to interpret later changes. Novel metaphors or creative elaboration of conventional metaphors highlight aspects of aphasia that usual ways of talking about language may fail to capture. Exploring individual conceptualisations of language impairment with people with aphasia may therefore be useful for a meaningful, collaborative approach to therapy.

Experimental artefacts in aphasia research: How experimental variables raise semantic over phonological errors in conduction aphasia

ABSTRACT. Introduction

A long-standing aspiration of case studies in aphasia research is to follow experimental procedures that warrant the results obtained and allow generalization. Despite this, as our knowledge of the relevance of different characteristics of the stimuli (frequency, concreteness) and of the experimental conditions (blocks, repeated naming, memory load) advances, it is becoming evident that some unexpected patterns described in the literature are easily explained as a consequence of a lack of experimental control. Namely, the STEPS constitutes a behavioral pattern in which people with aphasia produce more phonemic (phonological) errors with non-number words (e.g., tale → lale) but more semantic errors with numbers (e.g., 42 → 13) (Dotan & Friedmann, 2015). Currently, STEPS is explained by the Building Blocks Hypothesis, an account that locates the emergence of the semantic errors in the phonological output buffer (POB). Recently, we showed evidence that STEPS was not related to damage to the POB (García-Orza et al., 2020). Here, however, we explore the nature of the STEPS from an interactionist perspective (e.g., Martin et al., 1996). Interactionist models can explain the emergence of semantic errors – over phonemic errors – when assessing numbers, since numbers are high-frequency elements presented in semantically homogeneous lists under conditions of increased cognitive (memory) load (e.g., numbers of increasing length). Specifically, we compare the production of multidigit numbers (composed of high-frequency number words) with the production of sequences (2-4 words) of high-frequency vs low-frequency colors. It is hypothesized that more semantic errors will arise in high-frequency color sequences, whereas more phonemic errors will arise with low-frequency sequences. It is also expected that memory load facilitates the appearance of these errors.

 

Methods

Two female patients with conduction aphasia – ML, of the repetition variety (phonological input buffer), and DNR, of the reproduction variety (POB) – were assessed in three production tasks (naming, reading and repetition) with multidigit numbers (e.g., 452) as well as with high-frequency and low-frequency color sequences (e.g., green-red-blue and lilac-mallow-beige, respectively).

 

Results

Both patients committed more semantic than phonemic errors while producing numbers and high-frequency color sequences; in both cases phonemic errors were scarce. By contrast, phonemic errors arose while producing sequences of low-frequency colors. Additional analyses of length showed, for both patients, an increase in semantic errors for numbers and high-frequency colors when producing longer sequences. Both phonemic and formal errors showed non-significant differences across lengths, with only a tendency to increase in one patient (DNR).

 

Conclusions

Our results indicate that: a) frequency plays a role in the emergence of semantic (high-frequency) vs phonemic (low-frequency) errors; and b) the emergence of errors is directly proportional to memory load, as indexed by the number of words in the sequence. These data suggest that the STEPS effect is an “experimental artefact” arising from the interaction of variables such as lexical frequency, semantic context, and memory load during speech production. Our findings open a window onto the discussion of how speech errors arise in aphasia and how they can be manipulated.

Picture Naming and Word Retrieval Deficits in Patients with Epilepsy
PRESENTER: Stephanie Ries

ABSTRACT. Naming deficits have been documented in patients with temporal lobe epilepsy (TLE, Bell, 2001). The left temporal lobe is known to host several core language functions needed for picture naming, including semantic, lexical, and phonological access, selection, and encoding. However, word retrieval deficits in patients with TLE have not been investigated in detail using paradigms tailored to look at the semantic interference effect, arising from the activation of non-target semantically-related lexical items.

This study included 9 patients with intractable TLE, of whom 7 had left TLE. Every patient underwent a neuropsychological evaluation prior to electrode implantation in preparation for resective surgery, including cognitive and linguistic test batteries, and in particular the Boston Naming Test (BNT). After implantation, they participated in a continuous naming task in which they were asked to name intermixed pictures from different semantic categories (Howard et al., 2006). Naming latencies have been found to be longer the more items from a given semantic category have been named previously. This cumulative semantic interference effect (CSIE) has been linked to increasing difficulty in word retrieval caused by increasing interference from semantically-related items. Each patient’s epileptogenic zone was determined by identifying regions where intracranial electrodes showed epileptic activity throughout testing. Participants’ RT averages and CSIE slopes on our naming task were compared to those of typically developing adults from Ries et al. (2015).
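One way to quantify the CSIE is to regress each participant’s naming latency on the ordinal position of the picture within its semantic category and compare the resulting slope to the control distribution; in the sketch below, the trial data and control summary statistics are hypothetical, not values from this study.

import numpy as np

def csie_slope(ordinal_positions: np.ndarray, rts_ms: np.ndarray) -> float:
    """Slope (ms per within-category ordinal position) of naming RT,
    i.e., one participant's cumulative semantic interference effect."""
    slope, _intercept = np.polyfit(ordinal_positions, rts_ms, deg=1)
    return slope

# Hypothetical trials: within-category ordinal position (1 = first member
# of its category named) and the corresponding naming RT in milliseconds.
positions = np.array([1, 2, 3, 4, 5, 1, 2, 3, 4, 5])
rts = np.array([900, 940, 960, 1010, 1050, 880, 930, 955, 990, 1040])

patient_slope = csie_slope(positions, rts)
control_mean, control_sd = 30.0, 8.0          # hypothetical control norms
z = (patient_slope - control_mean) / control_sd
print(patient_slope, z)  # e.g., a large positive z would flag an atypically steep CSIE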

Our results show that, out of the 9 participants tested, 5 had average RTs greater than 1.5 standard deviations above the controls’ average RT. Of those 5 participants, 4 also scored in the impaired range on the BNT. All 5 patients had epileptogenic zones in the left temporal lobe, consistent with previous studies. Concerning the CSIE, only one participant (SD13) showed a steeper semantic interference effect compared to controls. The slope of the CSIE for this patient was 3.43 standard deviations above the mean slope of the controls, suggesting increased word retrieval difficulty. This patient’s epileptic profile differed from that of the other patients in that this patient’s brain showed multiple epileptic foci in addition to the left hippocampal focus. Moreover, this patient had an anterior right frontal lobe lesion caused by chronic bleeds.

Our results align with the literature in finding further evidence that individuals with left TLE have naming deficits. However, the slope of the CSIE in most of our participants with TLE was similar to that of the controls, suggesting that even when the left temporal lobe is epileptogenic, word retrieval as indexed by the CSIE is not necessarily impacted. The exception was patient SD13 who had a very large CSIE and who had diffuse abnormalities in the brain, suggesting that word retrieval as indexed through the CSIE involves a network of brain areas (de Zubicaray et al., 2015), and that damage to more than one of these regions is necessary to impact the CSIE.

Impairment of Neural Oscillatory Mechanisms of Speech Motor Planning in Aphasia

ABSTRACT. Introduction

Aphasia is an acquired communication disability commonly resulting from post-stroke damage to the left-hemisphere brain networks. Depending on the size, location, and type of the stroke, individuals with aphasia exhibit a wide range of behavioral symptoms such as disorders in speech fluency, auditory comprehension, word-finding, and speech repetition. Recent investigations have provided evidence that such deficits in aphasia may result from damage to lower-level brain networks implicated in speech production and motor control mechanisms that are not directly influenced by language-related neural processes [1-3]. In the present study, we investigated the neural oscillatory correlates of speech impairment in individuals with post-stroke aphasia.

 

Methods

A total of 34 subjects with post-stroke aphasia (22 males; age range: 42-80 yrs; mean age: 61.2 yrs) and 46 neurologically intact control subjects (23 males; age range: 44-82 yrs; mean age: 63.6 yrs) completed a speech vowel production task under altered auditory feedback (AAF) while EEG signals were simultaneously recorded from 64 scalp electrodes following a standard 10-10 montage. All subjects with aphasia were tested at least 6 months post-stroke and had undergone testing with the Western Aphasia Battery (WAB) [4]. Based on the WAB aphasia classification system, the distribution of aphasia types across the 34 subjects was as follows: Anomic = 7; Broca’s = 18; Conduction = 8; and Global = 1. Subjects in the control group had no history of speech, language, or neurological disorders. Subjects in both groups passed a binaural hearing screening and had thresholds of 40 dB or less at 500, 1000, 2000, and 4000 Hz. For this study, EEG data were analyzed to measure the modulation of beta band power of neural oscillatory activity within the 13-25 Hz frequency range before and after the onset of speech vowel production under normal auditory feedback (NAF).
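
Purely as an illustration of the beta-band power measure, the following R sketch (assuming the 'signal' package, a single-channel single-trial vector, and hypothetical sampling rate and onset sample) band-pass filters the signal to 13-25 Hz and expresses power as percent change from a pre-speech baseline; it is not the authors' actual EEG pipeline.

  # Sketch: beta-band (13-25 Hz) power modulation around speech onset.
  # 'eeg' is a hypothetical numeric vector for one channel/trial sampled at fs Hz,
  # with speech onset at sample 'onset'. Uses the 'signal' package.
  library(signal)

  fs <- 1000                                          # sampling rate (assumed)
  bp <- butter(4, c(13, 25) / (fs / 2), type = "pass")
  beta <- filtfilt(bp, eeg)                           # zero-phase band-pass filter
  power <- beta^2                                     # instantaneous beta power

  baseline <- mean(power[(onset - fs):(onset - 1)])   # 1 s pre-speech baseline
  erd <- 100 * (power - baseline) / baseline          # % change: negative = de-synchronization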

 

Results

Results of the preliminary analysis indicated deficits in the neural oscillatory mechanisms during the planning phase of speech production in individuals with post-stroke aphasia compared with controls. This effect was indexed by the reduced magnitude of beta band de-synchronization (i.e. power reduction) before the onset of speech as well as an earlier onset of power reduction in aphasia vs. controls.

 

 

Conclusions

Beta band de-synchronization has been suggested to play a key role in regulating the neural processes of motor planning and production in a wide range of behaviorally relevant tasks. This effect is proposed to arise from the interplay between thalamo-cortical networks that selectively activate task-relevant motor areas by priming prefrontal cortical neurons, reducing their excitability threshold before the onset of movement [5]. Findings from the present study suggest that individuals with aphasia exhibit deficits in engaging such neural processes to activate cortical motor neurons for speech production, as indexed by their pathologically altered patterns of beta band de-synchronization. Our preliminary analysis shows that individuals with aphasia have deficits in regulating both the timing and the overall power of beta band de-synchronization before the onset of vowel sound production, suggesting a deficit in the underlying neural mechanisms during the planning phase of speech.

Motor speech planning versus programming in Apraxia of speech
PRESENTER: Marion Bourqui

ABSTRACT. Introduction

 

Speech production models have long been debated in the literature. Some authors propose a single step between phonological encoding and articulation (e.g. “phonetic encoding” in Levelt, 1989), while others include two processes allowing the transformation of a linguistic code into a motor program (Guenther, 2016; Van der Merwe, 2020), sometimes called “motor speech planning” and “motor speech programming”. The latter models are based on observations of pathology. Indeed, a broad consensus has emerged in the literature that apraxia of speech (AoS) involves an impaired ability to retrieve and/or assemble the different elements of phonetic plans (Blumstein, 1990; Code, 1998; Varley & Whiteside, 2001; Ziegler, 2008, 2009), and this impairment has been located at the motor speech planning processing stage. A different locus has been attributed to dysarthria, whose underlying impairment has been located at the motor speech programming processing stage. There is, however, very limited empirical evidence in favor of two distinct processing stages transforming a linguistic (phonological) code into articulation.

In the present study, we sought to target (a) motor speech planning, via the comparison between the production of legal and illegal CCV clusters, and (b) motor speech programming, via the manipulation of uttering conditions. These two manipulations were crossed with two groups of participants with different types of motor speech disorders, AoS and dysarthria, who are expected to present opposite patterns of performance.

 

Methods

Participants: 4 participants with AoS following a left-hemisphere stroke; 4 participants with hypokinetic dysarthria (Parkinson’s disease, PD).

Material and procedure: stimuli consisted of bisyllabic pseudo-words matched on the first phoneme and second syllable and varying in first-syllable structure and legality (CV, legal CCV, illegal CCV). A delayed production task was used to separate linguistic from motor speech encoding. Participants had to produce the target stimuli as quickly and accurately as possible under two uttering conditions: normal speech or whispering.

 

Results

Accuracy was coded by two independent raters (inter-rater agreement between .829 and .926, i.e., almost perfect agreement; Kappa statistics, Landis & Koch, 1977).
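
For illustration, a minimal, self-contained R sketch of Cohen's kappa for two raters' accuracy codes is shown below; the example vectors are hypothetical.

  # Sketch: Cohen's kappa for two raters' correct/error codes.
  cohen_kappa <- function(r1, r2) {
    lv  <- union(r1, r2)
    tab <- table(factor(r1, levels = lv), factor(r2, levels = lv))
    p_obs <- sum(diag(tab)) / sum(tab)                       # observed agreement
    p_exp <- sum(rowSums(tab) * colSums(tab)) / sum(tab)^2   # chance agreement
    (p_obs - p_exp) / (1 - p_exp)
  }

  cohen_kappa(c("correct", "correct", "error", "correct"),
              c("correct", "error",   "error", "correct"))   # hypothetical codes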

Accuracy was fitted with mixed models (Baayen, Davidson, & Bates, 2008) in the R software (R Development Core Team, 2005). Results showed an effect of uttering condition, with decreased performance in the whispering condition compared to normal speech only in the PD group, and an effect of CV structure in both groups, with an interaction showing a larger effect in AoS.

 

Conclusions

Our preliminary results on 8 participants indicate the expected opposite pattern in participants with AoS and dysarthria: the uttering condition, which is assumed to be parametrized at the motor programming stage, only affected performance in participants with dysarthria, whereas clusters, and in particular illegal CCs, affected performance much more in AoS. These results bring further support to models of speech production that propose two processing stages of speech.

Production of Argument Structures by Chinese Post-Stroke Aphasics
PRESENTER: Guanqing He

ABSTRACT. Introduction

Verb deficits are commonly observed in people with aphasia, and argument structure has been a key focus in many studies (Kim & Thompson, 2004; Dragoy & Bastiaanse, 2010; Wang Honglei, 2015). Verbs with a greater number of arguments have been found to be more difficult for agrammatic aphasics (De Bleser & Kauschke, 2003; Wang & Thompson, 2016), supporting the Argument Structure Complexity Hypothesis (Thompson, 2003). However, some studies also find that this hierarchy does not apply to agrammatic aphasics with different language backgrounds in different tasks (Luzzatti, 2002; Thompson et al., 2012). Moreover, previous research has mainly focused on aphasics speaking European languages, and few studies have investigated Chinese-speaking aphasics. The current study aims to investigate argument structures in the production of Chinese-speaking aphasics, taking into account factors such as aphasia type, time post-onset, and aphasia severity.

Methods

Participants

Forty-two Chinese post-stroke aphasics participated in this study and were divided into two groups: 11 agrammatic aphasics and 31 non-agrammatic aphasics. Twenty-seven age- and education-matched healthy adults were included as controls.

Procedures

A spontaneous picture description task, a subtest of the Western Aphasia Battery-Revised (Kertesz, 2006), was administered to all participants. All participants were asked to use sentences to describe the picture within two minutes. The accuracy of verbs with one to three arguments was calculated for further analysis.

Analysis

Because no participant, except for one control, produced any three-place verb, our analysis focused only on the production of one-place and two-place verbs. A chi-square test was used to explore whether there was a significant difference between one-place and two-place verbs for agrammatic aphasics. A multi-factor ANOVA was used to explore the effects of three factors on verb production, i.e. aphasia type, degree of aphasia severity, and time post-onset.
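
A minimal R sketch of the two analyses named above is given for illustration only; the data frame 'verbs' and its column names are hypothetical.

  # Sketch: chi-square test and multi-factor ANOVA on verb production accuracy.
  # 'verbs' is a hypothetical trial-level data frame with columns subject,
  # correct (0/1), verb_type, aphasia_type, severity, and onset_time.

  # Chi-square test: one-place vs. two-place verb accuracy in agrammatic aphasics
  agram <- subset(verbs, aphasia_type == "agrammatic")
  chisq.test(table(agram$verb_type, agram$correct))

  # Multi-factor ANOVA on per-participant two-place verb accuracy
  acc <- aggregate(correct ~ subject + aphasia_type + severity + onset_time,
                   data = subset(verbs, verb_type == "two-place"), FUN = mean)
  summary(aov(correct ~ aphasia_type + severity + onset_time, data = acc))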

Results

Agrammatic aphasics exhibited no difference between the two types of verbs (p>.05). Time post-onset had a significant impact on the accuracy of two-place verbs (p<.05), while degree of aphasia severity and aphasia type had no significant impact on two-place verbs.

Conclusions

The findings of the current study run counter to previous studies: our data do not support the Argument Structure Complexity Hypothesis. Time post-onset had a significant influence on the accuracy of two-place verbs. Language background and task type may be factors influencing the accuracy of verbs with different numbers of arguments.

Clinical application of the Slovenian naming test: a pilot study in aphasia

ABSTRACT. Introduction

Lexical processing is defined as manipulation of units (lexemes) in a mental dictionary. A typical example is the ease with which we search for lexemes during spontaneous speech. Its complexity often becomes apparent in individuals with acquired language disorders, such as aphasia (Field, 2004), caused by neurodegenerative diseases or brain damage (Azhar, 2016). One of the most common symptoms of various types of aphasia is the inability to name things (Kirshner et al. 1984). Therefore, a naming test, such as the Boston Naming Test (Goodglass et al., 1966), is often used as part of instruments to assess language ability (see Rohde et al. 2018). Given (i) the language-specific effects of priming in lexical access, (ii) the effects of age of acquisition and lexeme frequency, and finally (iii) the effects of lexeme length, phonological and morphological structure, existing naming tests cannot be directly translated from language to language (see Chan et al., 2014) but need to be adapted.  

Methods

We developed a Slovenian naming test with 60 full-colour illustrated items that were balanced according to selected characteristics of the lexemes they were supposed to elicit, namely: the number of phonemes (5x3, 10x4, 10x5, 10x6, 10x7, 10x8, 5x9), the ratio between vowels and consonants, average age of acquisition, and frequency within the corpus of spoken Slovenian “GOS” (citation). Before standardizing the test, we conducted a pilot study with 26 subjects from the clinical group who had recently suffered a cerebrovascular event of the left hemisphere (diagnosis code according to ICD-10: R47.0) and were diagnosed with aphasia. They were matched to 26 healthy subjects according to education, first language, gender and age (N = 2x14 women + 2x12 men, mean age 70 years, SD = 12).

Results

Subjects in the clinical group scored an average of 67.04 points (55%) on the test, while subjects in the comparison group scored statistically significantly (p=0.001) and reliably (α=0.95) higher, at 90.62 points (76%). Analysis of the demographic data showed that males were more accurate than females by 1.33 points (1.1%), but according to a t-test for independent groups this difference was not statistically significant (compare Zec et al., 2007, and Hall et al., 2012). In the ANOVA, no statistically significant differences were found with respect to level of education (p=0.056), which is unexpected and most likely due to an unevenly distributed sample with respect to this variable. The sample was finally divided into below- and above-average groups according to mean age, and a t-test for independent groups showed that the latter performed statistically significantly worse (p=0.006), which is consistent with previous studies (e.g., Albert et al., 1988). Except for length, the internal structure of the lexemes did not correlate with naming performance.

Conclusions

In this paper, the results of the pilot study are presented in more detail and interpreted in the light of the data from the standardization of the test STIB. Data collection for the Slovenian adult speakers has been completed, while recruitment for the Slovenian children is ongoing.

Automated Verbal Self-Feedback for Improving Speech Fluency in Patients with Mild Chronic Nonfluent Aphasia
PRESENTER: Gerald Imaezue

ABSTRACT. This study developed and tested a novel automated method for applying verbal self-feedback to train patients over multiple sessions. Verbal self-feedback is a novel approach to treatment that uses recursive post-production feedback to fine-tune speech production. This approach is based on studies of the facilitatory role of recursive auditory feedback in speech production and vocal learning in humans and songbirds, respectively. We used a cross-over design to compare the effect of verbal self-feedback (experimental treatment) to that of script training (control treatment) on speech fluency in two patients with chronic mild nonfluent aphasia (AE2 and AE3). We developed a novel smartphone app for feedback training and measured speech fluency in the two participants, who received both experimental and control treatments sequentially, in counterbalanced order. Each treatment comprised two-hour daily sessions over three weeks, with a two-week washout between treatments. Both treatments used personalized scripts, each comprising eight sentences of varying lengths determined by the participants’ performance in practice sessions. Direct treatment effects were measured by comparing speech fluency measures (speech initiation latency, speech duration, speech rate and articulation rate) on the first day of treatment with the last day of treatment on trained scripts. Multiple-baseline assessments (three times per assessment phase) of speech fluency measures from sentence repetitions of untrained scripts were used to determine generalization of treatment effects. Nonoverlap of All Pairs was used to estimate effect size. The results showed significant direct and generalized treatment effects for both treatments on all measures (speech initiation latency, speech event duration, speech rate and articulation rate) in AE2. Similar gains were seen in AE3, but there was no improvement in speech initiation latency following script training. In conclusion, both participants showed improvement on most of the measures following both treatment blocks. Verbal self-feedback may be a promising tool to improve speech production efficiency in nonfluent aphasia.
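
For readers unfamiliar with the effect-size metric, a minimal R sketch of Nonoverlap of All Pairs is given below; the probe scores are hypothetical.

  # Sketch: Nonoverlap of All Pairs (NAP) for a measure where higher values mean
  # improvement. Every baseline-treatment pair is compared; ties count 0.5.
  nap <- function(baseline, treatment) {
    pairs <- expand.grid(b = baseline, t = treatment)   # every baseline-treatment pair
    mean(ifelse(pairs$t > pairs$b, 1, ifelse(pairs$t == pairs$b, 0.5, 0)))
  }

  nap(baseline = c(32, 35, 30), treatment = c(41, 44, 39))   # 1 = complete nonoverlap
  # For measures where lower is better (e.g. speech initiation latency),
  # reverse the comparison or negate the scores before applying nap().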

 

On the relation of semantic context effects in picture naming and semantic categorization: Evidence from aphasia
PRESENTER: Antje Lorenz

ABSTRACT. Introduction

The continuous picture-naming paradigm requires naming of several members of different semantic categories (e.g., clothes: blouse, skirt, glove, hat, shoe) in a seemingly random order, separated by 2 to 8 unrelated objects (e.g., Howard et al., 2006). Naming latencies increase in a linear fashion with each additional category member. This effect is assumed to be located at the lexical level of language production (e.g., Howard et al., 2006; Oppenheim et al., 2010). A cumulative effect is also observed in a receptive semantic categorization task but here facilitation is observed, suggesting a common conceptual-semantic origin of both effects (Belke, 2013). Cumulative semantic interference in participants with aphasia (PWA) has so far only been reported for error data (Harvey et al., 2019). In our study, PWA completed a picture-naming task and a categorization task with identical materials. We tested whether the pattern observed in unimpaired adult speakers can be replicated in PWA. Furthermore, we tested whether and in which way the two effects are related, that is, whether cumulative interference in picture naming can be predicted by cumulative facilitation in the receptive task.

 

Methods 

Eighteen participants with aphasia (PWA) were included. All participants suffered from mild word-finding difficulties resulting from a deficit of lexical access, while conceptual- and lexical-semantic processing were largely preserved. Mainly vascular etiologies were involved, leading to circumscribed, chronic, non-progressive lesions in the left hemisphere. Participants first completed a picture-naming task and, after around one week, a picture-categorization task in which they indicated via button-press whether depicted objects were man-made or natural entities. The stimulus set reported here consisted of 130 pictures of objects, including 90 experimental targets, 30 fillers and 10 practice items. All targets were monomorphemic nouns belonging to 18 different semantic categories (e.g., clothes, animals) with five members each. For both tasks, response accuracies and reaction times were measured. All picture-naming responses were recorded and transcribed, and speech onset latencies were determined using “Praat” (Boersma & Weenink, 2021).

 

Results

Response accuracies were relatively high (naming errors: 13.7%, range 1.1–35.6%; categorization errors: 12.7%, range 2.2–46.7%).

On average, picture-naming latencies within semantic categories increased by 69 ms from one ordinal position to the next, reflecting cumulative semantic interference (t= 3.504, p = 0.001). In the categorization task, participants’ response latencies systematically decreased within categories (on average by 58 ms between category members), revealing cumulative facilitation (t= -8.395, p < 0.001).

 

Conclusions

PWA who suffer from post-semantic deficits of lexical access show strong cumulative semantic interference. How their conceptual-semantic processing influences this effect is currently being analyzed. Analyses of cerebral lesion patterns in relation to individual performance are also under way. Theoretical and clinical implications will be discussed.

Noun-verb semantic distance analyses in sentence production of Alzheimer’s disease
PRESENTER: Jee Eun Sung

ABSTRACT. Introduction

The current study analyzed the sentence production behaviors of individuals with AD using the DementiaBank, which is part of the TalkBank project. The purpose of this study is to apply a machine-learning approach to analyze the noun-verb semantic distance in AD in a sentence construction task using the DementiaBank. We further analyzed verb clusters with those nouns and investigated whether the verb clusters are associated with demographic factors (age, education, and dementia severity).

 

Methods

We extracted data from 99 individuals with probable AD on the sentence construction task from the DementiaBank (Becker et al., 1994). Participants were asked to construct a sentence with given words (pencil, tree), similar to Altmann (2004). To investigate differences in semantic distance between text corpora, we performed independent-samples t-tests on the semantic distances for the two groups (DementiaBank vs. Wikipedia (or Blog)). In addition, to identify whether the verb clusters are associated with demographic factors (age, education, and dementia severity), we conducted a stepwise logistic regression.
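
As an illustration of the distance measure and the corpus comparison only, a minimal R sketch follows; the embedding object and the similarity vectors are hypothetical and do not reflect the authors' actual machine-learning pipeline.

  # Sketch: noun-verb cosine similarity and corpus comparison.
  # 'vec' is a hypothetical named list of word-embedding vectors;
  # 'dementia_sims' and 'wikipedia_sims' are hypothetical vectors of similarities.
  cosine_sim <- function(a, b) sum(a * b) / (sqrt(sum(a^2)) * sqrt(sum(b^2)))

  sim_pencil_write <- cosine_sim(vec[["pencil"]], vec[["write"]])

  # Welch (independent-samples) t-test on noun-verb similarities across corpora
  t.test(dementia_sims, wikipedia_sims, var.equal = FALSE)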

 

Results

  1. Semantic Distance between the target noun and verb

1.1 DementiaBank vs. Wikipedia database

For the analysis of ‘pencil’, the cosine similarity between the target noun and the verb was statistically higher in the DementiaBank than in Wikipedia (t(596) = -5.050, p = 5.881e-7), indicating that the semantic distance between the noun and verb is shorter in the DementiaBank than in Wikipedia. For ‘tree’, the cosine similarity was likewise statistically higher in the DementiaBank than in the Wikipedia corpus (t(24.98) = -7.888, p = 3.053e-8).

 

1.2 DementiaBank vs. Blog database

For the analysis of ‘pencil’, the cosine similarity was statistically higher in the DementiaBank than in the Blog corpus (t(199) = -3.702, p = 2.764e-4), again indicating shorter noun-verb semantic distances in the DementiaBank. For ‘tree’, the cosine similarity was statistically higher in the DementiaBank than in the Blog corpus (t(946) = -5.289, p = 1.526e-7).

 

 

  2. Verb Clustering and Regression Analyses

We found that education had a significantly positive (B = .661, Wald = 6.871, p = .009) effect on the choice between ‘write’ (baseline) and ‘be’ for the noun ‘pencil’, and a marginally positive (B = .325, Wald = 3.553, p = .059) effect on the choice between ‘write’ (baseline) and ‘use’ for the noun ‘pencil’. For MMSE scores, we found a significantly negative (B = -.294, Wald = 6.674, p = .01) effect on the choice between ‘write’ (baseline) and ‘be’ for ‘pencil’, and a significantly positive (B = .274, Wald = 3.921, p = .048) effect on the choice between ‘be’ (baseline) and ‘grow’ for the noun ‘tree’.

 

Conclusions

The current study found that the semantic distance between nouns and verbs is shorter in the AD population than in existing large databases. Furthermore, the semantic weight of the verbs that AD participants used in the sentence construction task was significantly related to dementia severity, indicating that people with AD tend to use more light verbs as their disease progresses. We applied machine-learning techniques to an open-access database within the framework of examining fine-grained linguistic deficits in AD patients’ sentence production.

Aging effects on the verb fluency measures using the semantic weight-based analysis
PRESENTER: Sujin Choi

ABSTRACT. Introduction

The verb fluency task is a sensitive tool for discriminating aging-related neurodegenerative diseases (Alegret et al., 2018). The current study focused on the semantic dimension of verbs by categorizing verbs from a verb fluency task according to their semantic weight (heavy vs. light). The purpose of the study was to investigate whether normal elderly adults demonstrated differentially greater difficulty depending on verb type compared to the young group. We further examined clustering and switching behaviors of verbs based on semantic weight.

 

Methods

A total of 115 Korean-speaking individuals (55 young, 60 elderly) participated in this study. All participants performed within the normal range on the Korean version of the Mini-Mental State Examination (K-MMSE; Kang, 2006) and the Seoul Verbal Learning Test (SVLT; Kang et al., 2012).

Participants were asked to produce as many verbs as possible within 60 seconds. Performance was analyzed according to the semantic weight of the verbs (heavy vs. light), the number of clusters, the mean cluster size, and the number of switches. A digit span task was administered as a working memory measure.
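
A minimal R sketch of the two-way mixed ANOVA design used below (between-subjects group, within-subjects verb type) is shown here for illustration, assuming a hypothetical long-format data frame.

  # Sketch: two-way mixed ANOVA on verb fluency counts using base R.
  # 'fluency' is a hypothetical data frame with one row per participant x verb type
  # and columns id, group (young/elderly), verb_type (heavy/light), n_correct.
  fluency$id <- factor(fluency$id)
  fit <- aov(n_correct ~ group * verb_type + Error(id / verb_type), data = fluency)
  summary(fit)   # the group:verb_type interaction tests differential difficulty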

 

Results

1. Total numbers of correct verbs

A two-way mixed ANOVA (group x verb type) revealed a significant two-way interaction, F(1,113)=31.259, p<.0001, η2partial = .217, indicating that the elderly group produced disproportionately fewer heavy than light verbs relative to the young group.


2. Number of clusters and Mean cluster size

For the number of clusters, a two-way mixed ANOVA revealed significant main effects of verb type (Heavy > Light), F(1,113)=78.498, p<.0001, η2partial = .410, and group (Young > Old), F(1,113)=5.182, p=.025, η2partial = .044.

For the mean cluster size, the two-way interaction was significant, F(1,113)=31.178, p<.0001, η2partial = .216, indicating that elderly adults showed smaller mean cluster sizes for heavy than for light verbs.

 

3. Number of Switches

A one-way between-subjects ANOVA revealed that the elderly group (Mean=1.95, SD=2.34) made significantly fewer switches than younger adults (Mean=2.95, SD=2.46), F(1,114)=5.006, p=.027.

 

4. Stepwise Regression Analyses

We performed stepwise regression analyses for each dependent measure of each verb type, with age, years of education, and scores on the K-MMSE, digit span, and SVLT as independent variables. Years of education was the significant predictor retained for all dependent measures across the board for the heavy verbs.
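
For illustration, a minimal R sketch of such a stepwise regression for one dependent measure is shown below; the data frame and variable names are hypothetical.

  # Sketch: stepwise selection of predictors of heavy-verb production.
  # 'd' is a hypothetical data frame with one row per participant.
  full <- lm(heavy_verbs ~ age + education + kmmse + digit_span + svlt, data = d)
  step(lm(heavy_verbs ~ 1, data = d),            # start from the intercept-only model
       scope = formula(full), direction = "both")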

 

Conclusions

Elderly adults demonstrated differentially greater difficulty in generating heavy than light verbs compared to the younger group. Considering that heavy verbs carry more semantically complex features than light verbs, it is likely that more semantic units need to be activated to retrieve verbs with heavy semantic weight. Education was the significant predictor retained for all dependent measures across the board for the heavy verbs. Education is one of the critical factors associated with cognitive reserve (Stern, 2006), and individuals with greater cognitive reserve are less vulnerable to cognitive decline. The current results indicate that the ability to retrieve semantically heavy verbs is more vulnerable to aging, and that education, as an index of cognitive reserve, may account for this decreased ability. Semantic-weight-based analyses can provide additional qualitative information from the verbal fluency task to detect aging effects and their related demographic factors.

Bilingual people with aphasia: Do error patterns in picture naming differ across languages?
PRESENTER: Mareike Moormann

ABSTRACT. Introduction

People with aphasia often exhibit lexical access deficits, which have been systematically examined in monolinguals (Dell & Schwartz, 2007). Systematic investigation of the characteristics of such deficits in bilinguals is required (Khachatryan et al., 2016).

Research suggests that (a) the error types observed in naming in bilinguals are consistent with those observed in monolinguals, although additional ‘wrong-language’ naming errors occur (Roberts & Deslauriers, 1999), and (b) error rates and patterns in bilinguals are mostly consistent across languages (Ravi & Chengappa, 2014).

However, differences between languages in the naming errors of bilinguals will only be identified by more precise and detailed studies that include influencing factors (Khachatryan et al., 2016).

Therefore, this case-series study investigates picture-naming errors in bilinguals within and across languages to expose differences by discussing influential factors, including the bilingual language profile (e.g., proficiency, age of acquisition), language characteristics (e.g., similarity, distance), language impairment, and linguistic variables known to influence naming (e.g., frequency, word length). The resulting patterns will be used to extend theories of bilingual speech production (Kroll et al., 2010) and its breakdown in aphasia.

 

Methods

Five sequential bilingual speakers with aphasia and word retrieval impairments were recruited (mean age = 66 years, SD = 7.54; languages: Dutch-German [P1+P2], English-German [P3], French-English [P4], English-French [P5]), including one participant (P4) classified as an early bilingual (acquisition before 12 years).

Bilinguals named ~350 object pictures from MultiPic (Duñabeitia et al., 2017) with at least 80% name agreement, in each of their languages, counterbalanced over four sessions. Responses were coded for accuracy and error type. We analysed the distribution of errors across languages. Participants’ bilingual language profiles were assessed by the BAT (Paradis, 1987), LEMO (Stadie et al., 2013) and LEAP-Q (Marian et al., 2007).

 

Results

All participants showed greater naming accuracy in their dominant language (three significantly so), regardless of whether this language was the first or second language acquired. One participant (P4) displayed the same error pattern across languages; all others showed different error patterns across languages. To understand the distribution of errors across languages, we will perform linear regression and correlation analyses.

 

Conclusion

Naming accuracy was greater in the dominant language of all bilingual participants. Therefore, dominance seems to predict naming accuracy. The influence of age-of-acquisition remains unclear since four participants were late and only one participant (P4) was an early bilingual. P4 was the only participant with the same error pattern across languages. Further exploration of this variable is therefore necessary.

Additional analyses will be conducted to classify the participants’ patterns, taking into account the various factors mentioned above. The results will extend current theories of (impaired) bilingual speech production.

Treatment in bilingual people with PPA: Evidence-based practice or trial-and-error?
PRESENTER: Taryn Malcolm

ABSTRACT. Introduction

There is little published research on the treatment of bilingual people with primary progressive aphasia (PPA). To date, we know of only two case studies, both with participants who have the logopenic subtype and both of whom received treatment in their later-acquired and post-morbidly more impaired language (Meyer et al., 2015; Lerman et al., under review). Thus, clinicians who treat such patients are unable to apply evidence-based practice and often rely on studies of post-stroke aphasia. However, the decline of two languages in a bilingual person with PPA differs from post-stroke language impairment because of the underlying pathology (Malcolm et al., 2019). Furthermore, treatment for PPA is different in nature from that of stroke aphasia, encompassing both prophylaxis and remediation, and thus goals must be set that are specific to PPA (Meyer et al., 2020) as well as specific to the patterns of actual and expected decline in each language (Costa et al., 2019; Malcolm et al., 2019).

 

Methods

Lerman et al. (under review) conducted a longitudinal case study investigating the effects of a verb-based semantic treatment on the two languages of an English-Hebrew bilingual with logopenic PPA. Verb Network Strengthening Treatment (VNeST) was provided in the later-acquired and post-morbidly more-impaired Hebrew, while language skills were assessed in both Hebrew and English before and after treatment. We assessed whether decline continued or was halted during the treatment period, and whether any halt in the decline was specific to targeted lexical retrieval skills in the treated and/or untreated language.

 

Results

 

Overall, WAB deterioration continued during the treatment block (as seen by the last two time-points on each graph that measure pre- and post-treatment scores), especially for repetition in both languages and production subtests in Hebrew. However, during the treatment block, there was no significant decline for lexical retrieval skills in either language at the word and sentence levels, or for written narratives. Lexical retrieval skills during oral narrative production continued to decline in both languages. These preliminary results indicate that VNeST was partially effective as a prophylactic treatment in both Hebrew and English in this pre-morbidly highly proficient participant with logopenic PPA.

 

Conclusions

Clearly, there is minimal evidence-based practice in this field. To date, our study (Lerman et al., under review), together with the previously published study (Meyer et al., 2015) indicate similar results: for a person with logopenic PPA, prophylactic treatment in their later-acquired language will likely be effective in that language and may also be effective in the untreated language. However, limitations in recruiting and treating this population are acute and certain issues need to be considered when planning future studies in this field, such as PPA subtype, time post-onset at the start of intervention, and how to accurately measure decline vs. stability of language skills.

Lexical retrieval in diglossic aphasia

ABSTRACT. Introduction

While bilingual aphasia has recently gained more interest in aphasiology (Grunden et al., 2020; Kiran & Gray, 2018), there is almost no research on aphasia in the diglossic population, i.e. persons speaking a standard variety and a dialect, with each serving a distinct sociolinguistic function. A large number of people worldwide are assumed to speak a dialect as well as a standard variety. Diglossia may be compared to bilingualism insofar as both varieties appear to be activated simultaneously.

Nevertheless, it is still a matter of debate how the processes of lexical selection in bilinguals and diglossics are executed (Costa et al., 1999; Green, 1998; Green & Abutalebi, 2008). Recent studies investigating dialect processing suggest that lexical selection mechanisms are comparable to those found in bilinguals (Vorwerg et al., 2019). So far, hardly any study has examined diglossic aphasia (Widmer Beierlein & Vorwerg, 2020). In Switzerland, a high percentage of the population is diglossic; Swiss German dialects (SG) are used for oral communication in almost all settings regardless of the social status of the speaker and enjoy high prestige, whereas High German (HG) is used for formal communication, reading and writing (Haas, 2004). The following study addresses diglossic word retrieval of low-frequency nouns and verbs in persons with aphasia (PWA), measuring correctness and naming latencies in a picture naming task.

 

Methods

34 PWA (23 Anomic, 8 Broca, 5 Wernicke, 1 Global) and 34 healthy controls, all of them with SG as first language, underwent a picture naming test containing single nouns and verbs. Half of the pictures were named in SG and the other half in HG. Analysis was performed using linear mixed models. 

 

Results

While the control group showed lower naming latencies in SG, the aphasic group displayed the opposite pattern, with lower latencies in HG. However, an in-depth analysis showed that this effect was mainly driven by the anomic PWA, who had significantly lower latencies than the Broca and Wernicke groups.

In terms of correctness, verbs were named significantly less accurately than nouns, and the anomic group in particular named pictures significantly more accurately than the other aphasic groups.

A variety effect was found neither for correctness nor for naming latencies.

 

Conclusions

The current study suggests that lexical retrieval in healthy diglossic speakers may be similar to language production in bilinguals, with faster lexical retrieval in the first language, i.e. the dialect. This effect was not replicated in the aphasic group, indicating that the lexical retrieval and selection mechanisms involved in differentiating between varieties may be impaired in aphasia.

The influence of multilevel factors on semantic-feature based naming outcomes in bilingual aphasia
PRESENTER: Michael Scimeca

ABSTRACT. Introduction

Recent work has investigated the effects of person- and treatment-related variables on word-retrieval outcomes following semantic-feature treatment (SFT) in monolingual aphasia (Quique et al., 2019). Yet, similar research has not been undertaken for bilingual aphasia. The current study examined 1) training outcomes from an SFT protocol for Spanish-English bilinguals with aphasia (BWA); 2) patterns of response generalization to untrained items and languages; and 3) the influence of treatment, participant, and item-level factors on treatment effects.   

 

Methods

Twenty-two Spanish-English BWA in the chronic phase of recovery received 10 weeks of SFT for word-retrieval impairment in Spanish or English. Adapted from Kiran et al. (2013), the intervention included 20 treatment sessions (2 hours per session twice per week) and was delivered via videoconference (Peñaloza et al., 2021). Treatment progress and response generalization to untrained items were assessed via 1) 3 pre-treatment naming probes, 2) 10 naming probes completed during treatment, and 3) 3 post-treatment probes. Naming probes consisted of 90 items: 15 each of trained items, semantically-related items, and unrelated control items as well as their corresponding translations in the untreated language (e.g. apple-orange-horse and manzana-naranja-caballo). Item-level psycholinguistic variables of interest included lexical frequency, phonological length (in phonemes), and phonological neighborhood density collected from the CLEARPOND (Marian et al., 2012) database. Additionally, naming severity scores were extracted from pre-treatment administration of the Boston Naming Test (BNT; Kaplan et al., 2001; Kohnert et al., 1998).

Logistic mixed-effects modeling using lme4 (Bates et al., 2015) in R examined group-level outcomes. Item-level naming accuracy was estimated in both the treated and untreated languages longitudinally; secondary analyses explored the effect of baseline naming severity and psycholinguistic factors on trained-item response accuracy.
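
A minimal R sketch of the kind of logistic mixed-effects model described here is given below; the data frame, column names, and random-effects structure are illustrative assumptions, not the authors' exact model specification.

  # Sketch: logistic mixed-effects model of item-level naming accuracy over treatment.
  # 'probes' is a hypothetical long-format data frame with columns correct (0/1),
  # session (numeric probe index), word_set (trained / related / control),
  # subject, and item.
  library(lme4)

  m <- glmer(correct ~ session * word_set + (1 | subject) + (1 | item),
             data = probes, family = binomial)
  summary(m)   # the session:word_set terms index differential improvement by word set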

 

Results

In the treated language, there was a significant interaction effect between session and word set (b=.27, SE=.01, z=18.78, p<.001), indicating higher likelihood of a correct response for trained items relative to control items over the course of treatment. A similar, yet less robust, pattern of improvement emerged in the untreated language over time (b=.08, SE=.02, z=4.36, p<.001), suggesting some degree of cross-language generalization to the trained item translations. However, there was no significant improvement over time for semantically related items relative to controls in either the treated or untreated language.  

A second series of analyses assessing the treatment effect for trained items found a significant interaction effect for session and baseline naming severity (b=.18, SE=.06, z=2.88, p<.01), demonstrating that BWA with milder naming deficits improved more quickly in treatment. After accounting for baseline naming severity, separate psycholinguistic models for phonological length, phonological neighborhood density, and lexical frequency revealed that the effect of treatment on trained item accuracy was attenuated for longer words (b=-.04, SE=.01, z=-5.60, p<.001), strengthened for words with dense phonological neighborhoods (b=.12, SE=.03, z=4.05, p<.001), and unchanged by differences in word frequency (b=.03, SE=.02, z=1.20, p=.23).

 

Conclusion

These preliminary findings support previous work documenting the efficacy of semantic-feature based treatment for word-retrieval impairment in BWA and suggest that psycholinguistic and severity-related factors modulate treatment response at the group level.

Cross-linguistic treatment effects in bilingual individuals with aphasia
PRESENTER: Kana Lopez

ABSTRACT. Introduction

As bilingualism and multilingualism are becoming increasingly common around the world, especially in places like California, the number of bilingual individuals with aphasia is increasing. Thus, understanding aphasia rehabilitation in bilinguals is a critical issue in the field of communication sciences. A topic of great interest in bilingual aphasia is which language(s) should be used in treatment. Although providing therapy in both languages may be beneficial, it may not be possible, as bilingual speech-language pathologists and resources are limited. But do all bilinguals benefit from therapy in only one language? Research has shown that providing treatment in one language may result in gains in both the treated and untreated language, but results are inconsistent. Generalization has been reported from the non-dominant language to the dominant language, but less often in the other direction (Edmonds & Kiran, 2006; Goral et al., 2012).

One aspect of a bilingual’s experience that is often ignored is how they learned their languages. Depending on how each language was learned, bilinguals may use the same region or different regions of the brain for each language (Yang & Li, 2012). The Declarative/Procedural model proposes that two languages learned naturalistically will show overlap for both lexicon and grammar, whereas learning an L2 in a formal setting will show overlap for the lexicon but separation for grammar (Ullman, 2005). The goal for this study is to determine how the manner of acquisition of the second language affects the cross-linguistic generalization of treatment for nouns and verbs, representing the lexicon and the grammar.

 

Methods

Two Spanish-English bilinguals with aphasia participated in a two-part naming intervention in English, their second language. Participant 1 learned English in an educational setting, starting at age 8, and Participant 2 learned English naturalistically when she moved to the U.S. at age 12. The participants received 2-4 weeks of treatment for nouns using Semantic Feature Analysis (SFA, Boyle, 2004) and 4 weeks of treatment for verbs using VNeST (Edmonds, Nadeau, & Kiran, 2009). The treatment phase included four 2-hour sessions per week. Treated and untreated items were assessed weekly in English and Spanish. Retrieval accuracy was assessed before treatment, between the two treatment phases, and after treatment.

 

Results

Greater cross-language generalization was observed for nouns in P1 compared to P2, while P2 showed greater within-language generalization compared to P1. Following verb treatment, P2 showed greater treatment response as well as more cross-language generalization compared to P1.

 

Conclusions

The hypotheses were partially supported. P2, who learned their L2 implicitly, demonstrated greater cross-linguistic generalization following verb treatment. However, contrary to the hypotheses, P1, the participant who learned L2 explicitly, demonstrated greater cross-linguistic generalization effects for nouns compared to P2, the participant who learned L2 implicitly. While these results are preliminary due to the small number of participants, the findings suggest that treating nouns in L2 may be beneficial for bilingual individuals with aphasia who learned their L2 explicitly. On the other hand, verb treatment in L2 may not be as beneficial for these individuals, particularly when considering cross-linguistic generalization. For individuals who learned their L2 implicitly, when deciding on a language for noun treatment, it may be more important to consider other factors such as the language environment, given that cross-linguistic generalization may not be observed.

Determining Primary Progressive Aphasia Variant with Longer Reading Versus Repetition Tasks
PRESENTER: Kristina Ruch

ABSTRACT. Introduction

Repetition and reading tasks are commonly used to evaluate primary progressive aphasia (PPA) (Lukic et al., 2019).  We hypothesized that (1) the ratio of reading to repetition errors can distinguish PPA variants, and (2) due to floor and ceiling effects, ratios of errors with short sentences with common words distinguish some individuals, while ratios of errors with lengthier sentences with longer and less common words better distinguish others. 

 

Method

We studied 210 participants (84 lvPPA, 66 svPPA, and 60 nfavPPA) on at least one sentence reading and repetition task: (1) simple sentence reading and repetition tasks (5-10 words each) from the National Alzheimer’s Coordinating Center’s (NACC) FTLD Neuropsychological Battery (www.naccdata.org) and (2) a new task with longer sentences (10-16 words each) including longer, lower-frequency words (e.g. Japanese, intimidated). We calculated the ratio of reading errors to repetition errors (omitted, substituted or misarticulated words) using the simple and the new sentences. We used multinomial regression to determine whether the reading:repetition error ratios discriminated between variants, and t-tests to compare reading to repetition scores for each task and each variant.
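
For illustration, a minimal R sketch of a multinomial regression on the error ratios (using nnet::multinom) and of a paired reading-vs-repetition comparison is shown below; all object and column names are hypothetical.

  # Sketch: multinomial regression of PPA variant on reading:repetition error ratios.
  # 'ppa' and 'errs' are hypothetical data frames; ppa has columns variant
  # (lvPPA/svPPA/nfavPPA), ratio_simple, and ratio_new.
  library(nnet)

  m <- multinom(variant ~ ratio_simple + ratio_new, data = ppa)
  summary(m)

  # Paired t-test of repetition vs. reading error counts within one variant
  with(subset(errs, variant == "lvPPA"),
       t.test(errors_repetition, errors_reading, paired = TRUE))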

 

Results

A total of 146 individuals completed the two (reading and repetition) simple sentence tasks and 19 completed the two new sentence tasks (15 completed both simple and new). There were no significant differences between PPA variants in age or education (by ANOVA) or sex (by chi-squared), or between those who completed the simple, new, or both tasks. Using multinomial regression, the simple plus new ratio of reading:repetition errors explained more of the variance between PPA variants (pseudo R2 = 0.32; p=0.03; n=15) than either the simple sentence ratio (pseudo R2 = 0.03, p=0.02, n=146) or the new ratio (pseudo R2 = 0.22; p=0.01; n=19). Both svPPA and lvPPA patients made significantly more total errors on the simple repetition task than the simple reading task, but the mean difference was greater for lvPPA (p<0.00001 vs. p=0.01). Only the lvPPA patients made significantly more total errors on the new repetition task than the new reading task (p<0.00001, vs. p>0.1 for the other variants).

 

 

Conclusion

The ratio of reading:repetition errors in the new sentence task better discriminates lvPPA from the other variants than the same ratio in the simple sentence task (from the NACC battery), but they provide complementary information. Either pair of tasks discriminates between lvPPA and nfavPPA, which is generally the hardest diagnostic distinction to make (Tippett, 2020). However, the combination of short and long sentences improves classification and distinguishes lvPPA from svPPA. SvPPA participants actually made more errors on reading than repetition in the new task. Further investigation may determine if word frequency, word length, or sentence length effects account for the differences. Larger numbers of participants who complete both pairs of tasks are needed to confirm our findings.

Subtype classification in primary progressive aphasia using operationalized criteria
PRESENTER: Anja Staiger

ABSTRACT. Introduction

According to current diagnostic criteria, primary progressive aphasia (PPA) is classified into three main variants: a nonfluent-agrammatic (nfvPPA), a semantic (svPPA) and a logopenic variant (lvPPA) [1]. As previous studies have shown, not all patients can be clearly assigned to one of these subtypes (10-41% unclassifiable; [2]). The PPA main types are defined by distinct patterns of impairment across different speech and language domains. However, the classification scheme does not provide clear guidance on when task performance is considered impaired. In the meantime, a few studies have proposed strictly operationalized criteria for classifying the variants [e.g., 3-5]). To our knowledge, a comparable approach for German-speaking patients with PPA does not yet exist. The aim of the present study was to determine how well the classification system can be applied to German-speaking patients with PPA using clearly defined criteria and norm data from established speech- and language batteries.

 

Methods

So far, 35 native German-speaking patients (15 female) who met the core criteria for PPA [1] have been included in the study. The sample will be expanded to 40 patients. Assessment of speech and language functions included (a) the Aachen Aphasia Test (AAT) subtests for comprehension of single words and sentences, confrontation naming, sentence repetition, and written language [6], (b) ratings of agrammatism, word-retrieval and phonological errors in spontaneous speech production according to AAT guidelines, (c) subtests for comprehension of complex syntax from the German version of the Test for Reception of Grammar (TROG-D; [7]) and semantic sorting subtests of the Nonverbal Semantics Test (NVST; [8]), and (d) consensus ratings of motor speech performance using a word repetition test for apraxia of speech (Hierarchical Word Lists - compact version; [9]) and the Bogenhausen Dysarthria Scales (BoDyS; [10]). Definitions of impaired task performance were established for all variables, using published norms where available. For all but six participants, 3T MRI data were available.

 

 

Results

Reliability analyses are pending. According to preliminary analyses, 27 participants (77.1%) could be clearly assigned to one of the main PPA variants (22.9% PPA unclassifiable). 10 patients each met the clinical criteria for nfvPPA and svPPA (28.6% each). 7 patients (20.0%) could be classified as lvPPA.

 

Conclusions

The tests used and the criteria defined for performance impairment allowed for a PPA classification in the majority of cases. With 22.9%, the proportion of unclassifiable cases was within the range of previously published studies. This suggests the general feasibility of the approach. To validate the clinical classification, structural imaging data will be used in a further analysis step.

Retraining syntactic structures via script training in progressive aphasia: evidence for implicit learning in agrammatism
PRESENTER: Lisa Wauters

ABSTRACT. Introduction

Script training is an effective treatment approach for individuals with stroke-induced and progressive aphasia (Hubbard et al., 2020). Studies have documented the benefits of script training for functional communication (e.g., Goldberg et al., 2012), but few have examined whether script training can remediate underlying linguistic deficits.

Script training typically utilizes the repeated recitation of sentences, which may provide opportunities for structural priming (i.e., priming for syntactic forms). Several studies have shown structural priming effects in individuals with agrammatism (e.g., Cho-Reyes et al., 2016). Implicit processes are considered to drive these effects and support grammatical learning (Chang et al., 2000). Thus, the cumulative priming effects associated with repeated script practice may facilitate lasting improvement in the production of primed grammatical structures.

This study examined the effects of script training with embedded syntactic targets on the ability of participants with progressive agrammatic aphasia to accurately produce complex syntactic structures in constrained tasks and spontaneous speech.

Methods

Three individuals with progressive agrammatic aphasia participated: two with nonfluent/agrammatic primary progressive aphasia (Gorno-Tempini et al., 2011) and one with behavioral variant frontotemporal dementia with agrammatism.

Six personally-relevant scripts regarding functional topics were developed. One or two target syntactic structures (i.e., subject relative clauses, passive structures, present progressive auxiliaries, and object relative clauses) were selected for each participant based on standardized grammar assessments and analyses of connected speech.

Participants underwent Video-Implemented Script Training for Aphasia (VISTA; Henry et al., 2018) for six weeks. Twice weekly treatment sessions targeted memorization and conversational usage of scripts, complemented by 30 minutes of daily unison script production practice with a video model. Four scripts were trained, and two remained untrained. No explicit training of syntactic structures was provided. 

 Multiple-baseline data were collected to track performance on scripts. Twenty-six syntax production probes (adapted from Thompson et al., 2012a,b) were administered at pre- and post-treatment for each target structure. Three spontaneous speech samples were collected at each time point. Samples were transcribed and the frequency of occurrence for each target structure was calculated. 

 

Results

Production of correct, intelligible scripted words for each trained topic improved upon initiation of treatment. All participants reached criterion performance of 90% for all trained scripts. Performance on structured syntax probes improved significantly from baseline for one of two structures for each nfvPPA participant. Production of target structures in spontaneous speech increased for all but one target structure.

 

Conclusions

We observed increased production of targeted syntactic forms following VISTA with embedded syntactic structures, indicating that script training facilitated generalized improvement in the production of syntax in the absence of explicit training.

These findings support the notion that implicit modes of training may benefit syntactic production in agrammatic progressive aphasia, consistent with evidence of implicit learning (Schuchard & Thompson, 2014) and positive effects of implicit priming in treatment (Lee & Man, 2017) observed in stroke-induced agrammatic aphasia.

Future studies should investigate whether these findings extend to a larger group of individuals with agrammatic aphasia and examine implicit learning for a variety of syntactic structures.

Structural Correlates of Language Processing in Primary Progressive Aphasia
PRESENTER: Curtiss Chapman

ABSTRACT. Introduction

Studies exploring the relationship between brain structure and language function in primary progressive aphasia (PPA) provide important information about pathomechanisms of PPA and about functions of the healthy brain. However, existing studies mostly rely on small samples, are limited by the inclusion of only one PPA variant, and do not probe multiple aspects of language processing (e.g., Migliaccio et al., 2016), all of which limit their power to detect structure-behavior relationships and to identify commonly damaged areas across PPA groups that contribute to task performance. To address these issues, we explored structure-function relationships in a large cohort of PPA patients across multiple aspects of language processing.

Methods

We analyzed data from 61 controls and 118 PPA patients, including semantic (svPPA), logopenic (lvPPA), and nonfluent-agrammatic (nfvPPA) variants. We used multiple regression across PPA subtypes to analyze the relationship between either voxel-based morphometry (VBM) or cortical thickness measures and several language tests: picture naming (Boston Naming Test), auditory word-picture matching and repetition (Point and Repeat task), category and phonemic fluency, and the reading and writing subtest of the Aachen Aphasia Test. Our analyses controlled for age, gender, scanner, and, for VBM analyses, total intracranial volume. We also examined which brain areas correlated with task performance overlapped with cortical atrophy in each PPA group.
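
Note that VBM analyses are typically run voxel-wise in dedicated neuroimaging software; purely as an illustration of the covariate-adjusted regression logic described above, a toy region-level R sketch follows, with all data-frame and ROI column names hypothetical.

  # Sketch: regressing a language score on regional gray-matter measures while
  # controlling for nuisance covariates, one region at a time.
  # 'dat' is a hypothetical data frame with columns bnt (naming score), age, gender,
  # scanner, tiv (total intracranial volume), plus one column per ROI measure.
  rois <- c("gm_stg_left", "gm_mtg_left", "gm_ifg_left")   # hypothetical ROI columns

  roi_stats <- lapply(rois, function(r) {
    f <- reformulate(c(r, "age", "gender", "scanner", "tiv"), response = "bnt")
    summary(lm(f, data = dat))$coefficients[r, ]            # estimate, SE, t, p for the ROI term
  })
  names(roi_stats) <- rois
  roi_stats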

Results

All PPA groups were impaired compared to controls on all language tasks (p<.001). Each task showed multiple associations with atrophy across patient groups (p<.0001, uncorrected). Picture naming and word-picture matching were associated with atrophy to bilateral temporal cortex and left frontal cortex, and subsets of these regions were also associated with category fluency, consistent with a semantic role for these regions. PPA patients shared atrophy within clusters of these task-associated regions in both temporal lobes, suggesting a common contribution to naming problems. In contrast to naming and word-picture matching, phonemic fluency, repetition, and reading and writing were associated with atrophy to left medial frontal or a sparser left fronto-parietal network, indicating potential roles of phonological, working memory and attentional deficits. Task-associated regions often overlapped with patient atrophy, though some did not, indicating that not-yet-atrophied regions also play a critical role in task performance. Regions associated with picture naming, word-picture matching and category fluency overlapped with atrophy across all patients, though especially lvPPA and svPPA; other task-associated regions only overlapped with lvPPA and/or nfvPPA atrophy (phonemic fluency, repetition) or did not overlap (reading and writing).

Conclusions

Our results provide converging evidence on how gray matter atrophy in the language network contributes to deficits of PPA patients. We observe strong evidence concerning brain regions associated with language processing in common clinical tests that is consistent with previous research (Mesulam et al., 2018; Rogalski et al., 2011). Furthermore, we show that overlapping damage across PPA variants is likely to create similarities in group performance in many single-word language tasks. Likewise, although all PPA variants were impaired on all language tasks, their atrophy did not always overlap with task-associated regions, indicating a role for non-atrophied cortex in task performance.

Brain areas that mediate sentence comprehension in primary progressive aphasia: Evidence from perfusion imaging
PRESENTER: Olivia Herrmann

ABSTRACT. Introduction: Beyond the core linguistic deficits, individuals with aphasia exhibit concomitant deficits in executive functions, in particular working memory (WM) (e.g., Caplan, Michaud, & Hufford, 2013; Murray, 2012), which is responsible for active mental manipulation. Syntactic processing in sentence comprehension requires such a storage and computational system, and deficits in WM have been shown to predict sentence comprehension in post-stroke aphasia (Pettigrew & Hillis, 2014; Varkanitsa & Caplan, 2018). Little research has been done in primary progressive aphasia (PPA) despite the existence of sentence comprehension deficits in all variants. In this study, we asked whether performance on a widely used WM task (Digit Span backward; DSB) predicts performance on sentence comprehension (SOAP Test; Love & Oster, 2002), and which brain areas mediate such effects, particularly the left middle frontal gyrus (MFG), an important area for WM, or the left inferior frontal gyrus (IFG), a typical language area.

 

Methods: Thirty-six participants with PPA (mean age 67.05 ± 5.86 years) underwent comprehensive baseline cognitive-linguistic evaluations followed by an MRI, specifically a pseudo-continuous arterial spin labeling (pCASL) sequence. All MRI scans were performed on a 3T MRI scanner with an 8-channel head coil (Philips Healthcare, Best, Netherlands). pCASL sequence scan parameters were: field of view = 205 × 205 mm², matrix = 64 × 64, 39 axial slices, thickness = 3.2 mm, TR/TE = 5817/9.3 msec, labeling duration = 1.8 seconds, post-labeling delay = 1.8 seconds, 6 pairs of label and control images, 3D gradient-and-spin-echo with background suppression, duration 5 minutes 14 seconds. Cerebral blood flow (CBF) maps were generated from the pCASL MRI images using JHU’s cloud-based ASL analysis software, ASL-MRICloud (Li et al., 2019). Relative CBF (i.e., perfusion) was calculated by dividing CBF within a brain region by total CBF over the entire brain. Simple linear regressions established the predictive relationships among the behavioral measures (DSB and SOAP scores) and perfusion, and multiple linear regressions were used to test for mediation effects (Shrout & Bolger, 2002).
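
As a minimal sketch of the regression-based mediation logic (Shrout & Bolger, 2002) described above, the snippet below compares a simple regression of SOAP on DSB with a multiple regression that adds left MFG perfusion; the data file and column names (DSB, SOAP, MFG_perf) are hypothetical placeholders.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ppa_perfusion.csv")  # hypothetical file with one row per participant

direct = smf.ols("SOAP ~ DSB", data=df).fit()               # total effect of WM on comprehension
mediated = smf.ols("SOAP ~ DSB + MFG_perf", data=df).fit()  # adds left MFG relative perfusion

# A reduced DSB coefficient in the second model relative to the first is taken as
# evidence that left MFG perfusion partially mediates the WM-comprehension association.
print(direct.params["DSB"], mediated.params["DSB"])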

 

Results: Performance on DSB significantly predicted performance on sentence comprehension, SOAP scores. DSB was also significantly associated with perfusion of the left MFG and left IFG opercularis (IFGoperc). Importantly, DSB was not associated with perfusion of the IFG triangularis (IFGtri). When DSB and the left MFG were used in a multiple linear regression for SOAP, DSB had less predictive power of SOAP scores than the simple linear regression of DSB on SOAP, indicating a partial mediation effect of the left MFG. Similarly, when DSB and the left IFGoperc were used in a multiple linear regression for SOAP, DSB again had less predictive power of SOAP scores than its corresponding simple linear regression, indicating another partial mediation effect of the left IFGoperc.

 

Conclusions: The present study indicates that the left MFG and left IFGoperc partially mediate WM associations with sentence comprehension, as previously shown by Turken & Dronkers (2011) and Zaccarella & Friederici (2015). These findings highlight that performance on a basic WM test may predict sentence comprehension, mostly to the extent that the MFG and IFGoperc (but not IFGtri) are involved. We present a simple mediation model with which future studies could investigate unique contributions of other brain areas, or differences between variants, as mediators of the relationship between cognitive functions and language.

Brain Perfusion and Neurocognitive Tasks in Patients with Primary Progressive Aphasia

ABSTRACT. Introduction

Cerebral blood flow (CBF) or brain perfusion is considered a marker of local neuronal activity and has been associated with performance in language and domain-general functions (Leeuwis et al. 2017, 2018). Neurodegeneration in patients with Primary Progressive Aphasia decreases blood flow in damaged tissue affecting both language and domain-general performance (Gorno-Tempini et al. 2011). This study aims to model language and domain-general performance based on the perfusion of frontal, parietal, and temporal brain areas known to be involved in language and domain-general functions.

 

Methods

To investigate CBF in frontal, parietal, and temporal brain areas, we conducted perfusion analysis in 38 patients with Primary Progressive Aphasia (13 with the logopenic PPA variant, 19 with the non-fluent PPA variant, 6 with the semantic PPA variant). MRI scans were performed on a 3-Tesla MRI scanner using an 8-channel head coil (Philips Healthcare, Best, Netherlands). We generated CBF maps from the pCASL MRI images using ASL-MRICloud, Johns Hopkins University’s cloud-based Arterial Spin Labeling analysis software (Li et al. 2019), and calculated relative CBF by dividing the CBF within a brain region by the CBF over the entire brain.
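
A minimal sketch of the relative CBF computation described above, assuming a CBF map and an atlas parcellation resampled to the same space; file names and label handling are illustrative, not the actual ASL-MRICloud outputs.

import numpy as np
import nibabel as nib

cbf = nib.load("cbf_map.nii.gz").get_fdata()          # absolute CBF map from the pCASL pipeline
atlas = nib.load("atlas_labels.nii.gz").get_fdata()   # integer-labeled parcellation

whole_brain_mean = cbf[atlas > 0].mean()
relative_cbf = {int(label): cbf[atlas == label].mean() / whole_brain_mean
                for label in np.unique(atlas) if label > 0}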

 

Results

Domain-general areas of the left hemisphere, mostly belonging to the multiple demands network (e.g., MFG, MFG_DLPFC, AG), correlated with both language and domain-general functions (attention, working memory, learning, in tasks such as Trail A and B, Digit span forward and backwards, word fluency and RAVLT). In addition, these areas correlated with language tasks such as naming, sentence repetition, sentence comprehension (SOAP), and spelling. Conversely, core language areas such as the left IFG triangularis, IFG opercularis, SMG, and STG correlated only with language tasks such as naming, sentence comprehension, word fluency, and spelling.

 

Conclusions

These results show that areas in frontal, parietal, and temporal cortex representing the multiple demands network (REF) are associated with both domain-general and language functions, whereas core language areas are associated mostly with language functions. Understanding these contributions can help determine the multifunctional role of brain networks in neurocognitive functions and lead to improved intervention strategies in PPA and other related neurodegenerative communication disorders.

Morpho-syntactic processing in Primary Progressive Aphasia and stroke-induced aphasia: comparison of ERP response patterns

ABSTRACT. Introduction

People with the agrammatic variant of Primary Progressive Aphasia (PPA-G) and people with stroke-induced agrammatic aphasia (StrAph) both present with morpho-syntactic impairments and non-fluent speech with grammatical deficits in the presence of spared semantic processing [1]. However, in PPA-G, grammatical deficits gradually emerge over time due to neurodegenerative disease [2], while in StrAph, deficits occur suddenly due to a cerebrovascular lesion. Only a few studies have directly compared language deficits in StrAph and PPA, and none have used on-line paradigms, although these may be more sensitive in detecting language deficits [3]. In the present study, we compared on-line processing of subject-verb agreement violations in PPA-G and StrAph using ERP.

 

Methods

Sixteen healthy adults (ages 35-78 years) and two groups of people with aphasia (StrAph, n=7, ages 26-72 years; PPA-G, n=10, ages 52-76 years) completed a sentence acceptability judgment task while EEG was recorded from 32 scalp electrodes. Both patient groups presented with language impairments consistent with agrammatism. However, the StrAph group, compared to the PPA-G group, presented with more severe language deficits overall, was less fluent, and was more impaired on offline measures of sentence processing.

This study included (a) morpho-syntactic and (b) semantic conditions. For each, half of the sentences (n=50) contained a violation. Data from each group were analyzed separately for both conditions using mixed-effects regression. For each regression model, the dependent variable was the mean amplitude of the EEG signal in pre-selected time windows, with sentence type (correct, violation) and electrode region (posterior left/right/midline, anterior left/right/midline) as fixed effects and participant as a random effect.
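
The following sketch illustrates one such mixed-effects model using Python's statsmodels, with mean window amplitude regressed on sentence type and electrode region and a random intercept per participant; the data frame and column names are placeholders, and the original analysis may have used different software or coding.

import pandas as pd
import statsmodels.formula.api as smf

erp = pd.read_csv("p600_window_means.csv")  # hypothetical: one row per participant x condition x region
model = smf.mixedlm("amplitude ~ sentence_type * region",
                    data=erp, groups=erp["participant"]).fit()
print(model.summary())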

 

Results

Morpho-syntactic violations elicited a significant, posteriorly-centered P600 in the group of healthy adults. Compared to the healthy controls, the StrAph group showed a delayed P600 with an anterior shift, while the PPA-G group showed no response to morpho-syntactic violations. Semantic violations elicited a significant, centro-parietally distributed N400 in all three participant groups.

 

Conclusions

Results indicate that the healthy participants undertake processes of re-analysis/repair after detecting violations of subject-verb agreement. In PPA-G, participants fail to detect such violations. Meanwhile, in StrAph, violations are detected, but re-analysis processes are delayed. In addition, the anterior shift of the scalp distribution in StrAph is in line with a previous study on older adults showing a more anterior distribution of the P600 in response to agreement violations in written sentences [4]. While the scalp distribution of ERP responses does not necessarily reflect the activity of underlying regions, this difference may reflect increased reliance on domain-general resources [5] supporting re-analysis processes. This suggests that recruitment of domain-general cognitive resources may be hindered in people with PPA-G due to the more widespread cognitive decline in this group (see also [6]).

Results from the semantic condition suggest semantic processing is preserved in both patient groups, in line with previous studies [7, 8]. Notably, no anterior shift of the N400 was noted in the StrAph group, suggesting that the abnormal P600 topography in this group does not simply reflect lesion-related shifts.

Session 12 (permanent): Poster session

Monday, 2pm-3.30pm: Morphology; Syntax; Discourse

Gender Agreement Processing in Transcortical Sensory Aphasia

ABSTRACT. Introduction

Little neuropsychological research has been carried out on the availability of the Gender feature to the syntactic component. This work aims to provide evidence for or against a dissociable syntactic module exclusively geared towards ensuring that elements of a sentence “match” by carrying coherent linguistic feature values.

When mediated by both lexical and phonological decoding, oral repetition in Transcortical Sensory Aphasia (TSA) has been shown to display linguistically informed altering of purposefully grammatically incorrect repetition stimuli (Whitaker, 1976; Davis et al., 1978), with a tendency to correct grammatical errors during repetition in spite of the absence of semantic comprehension. The present TSA single-case study, conducted in Italian, investigated the processing of linguistic Gender agreement errors through a series of oral repetition tasks, with the purpose of: (i) investigating whether morphosyntactic and semantic abilities can be independently spared; and (ii) investigating whether Gender agreement errors are among the linguistic facts that the patient retains sensitivity towards, and if so, how.

 

Methods

The patient (TST) was a 78-year-old female native speaker of Italian diagnosed with TSA, who had sustained an ischemic lesion in the white matter of the left temporo-parietal and insular area. TST was administered 8 oral repetition tasks, each containing Gender agreement errors that occurred in either a phrase condition (i.e., ‘definite article + noun’) or a sentence condition (i.e., ‘subject + nominal predicate’). These different conditions were formulated with the purpose of detecting possible differences in the processing of the Gender feature in two different syntactic environments. A number of additional variables were introduced: singular/plural; feminine/masculine; Gender morphological (un)informativeness; common noun/proper name status; animate/inanimate noun referents.

 

Results

During repetition, the changes applied by the patient were almost exclusively corrective (82/96) and mostly followed a left-to-right strategy (75/82), meaning the Gender of words that preceded determined the Gender of words that followed in the phrase or sentence. Corrective changes were evaluated according to a number of dimensions: singular/plural*, feminine/masculine*, morphological informativeness of nouns^, common noun/proper name status°, phrase/sentence agreement°, animate/inanimate nouns°. The effect of most interest was that Gender of animate nouns elicited corrective changes, while Gender of inanimate nouns was utterly ignored.

*no significant difference; ^significant difference only with reference to proper names; °significant difference

 

Conclusions

Findings indicate that: (i) in TSA, morphosyntactic and semantic abilities can be independently spared; (ii) in Italian, Gender is morphologically realized and could be accessed for the purposes of agreement only in the case of animate nouns. This suggests that, Gender-wise, inflection on animate nouns might be qualitatively different from that of inanimate nouns, whose inflectional suffixes and related targets (i.e. definite articles referring to them) were never modified by the patient: while the morphology of animate nouns carries a Gender value, the morphology of inanimate nouns might be an expression of morphological Class only. In this study, the morphological realization of Gender sat on determiners, adjectives, and on animate nouns; inanimate nouns did not seem to bear the morphological realization of Gender on themselves.

Online comprehension of verbal time reference in primary progressive aphasia: Evidence from eyetracking
PRESENTER: Haiyan Wang

ABSTRACT. Introduction

Primary progressive aphasia (PPA) is a degenerative disease affecting language while leaving other cognitive faculties relatively unscathed (Mesulam, Wieneke et al., 2012). The three major variants of the disease affect language in different ways. The agrammatic variant is associated with grammatical impairments; the logopenic variant with deficient word retrieval; and the semantic variant with impaired lexical-semantic representations. Here we investigate verbal time reference in PPA. Verbal time reference specifies when an event happens/happened. For example, drinks and is drinking indicate events in the present, but drank and has drunk indicate events that happened in the past. Prior evidence from many languages suggests that reference to past events is more tightly linked to complex grammar than reference to present events; hence past reference is more difficult to comprehend and more vulnerable to impairment in people with agrammatic aphasia resulting from stroke (Bastiaanse et al., 2011). The present study examined verbal time reference in patients with PPA, with the expectation that those with the agrammatic variant would evince greater difficulty with past than present time reference, but that the logopenic and semantic variants would not show this pattern due to the relative sparing of complex syntax in these PPA variants.

Methods

Participants completed a visual world eye-tracking task of sentence comprehension, which was analyzed for accuracy and eye movement patterns. The task comprised 20 actions (e.g., drink), each probed with a sentence in a past reference form (“drank” or “has drunk”) or a present reference form (“drinks” or “is drinking”). Participants listened to each sentence while viewing an array of two action photos, one with the action ongoing (present) and one with the action completed (past), and pointed to the matching picture.
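
For illustration, a minimal sketch of how sample-level eye-tracking data might be collapsed into proportions of looks to the target picture per time bin, the summary on which the analyses below rest; column names and bin size are assumptions, not the study's actual processing pipeline.

import pandas as pd

samples = pd.read_csv("vwp_samples.csv")            # hypothetical sample-level export
samples["bin"] = (samples["time_ms"] // 50) * 50    # 50 ms time bins
prop_target = (samples.groupby(["group", "condition", "bin"])["look_target"]
               .mean()                              # proportion of samples on the target picture
               .reset_index(name="prop_target"))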

Results

Results from the eye-tracking data indicate that all PPA groups fixated on the correct picture less than the healthy controls for past time reference. This pattern also was found for present time reference in the logopenic and semantic, but not the agrammatic, groups. The agrammatic group also showed delayed looks to the correct picture relative to healthy controls, but only for past time reference. These results are consistent with prior findings for agrammatic participants, and consistent with a grammatical deficit that impacts comprehension of past time reference. The results from the logopenic and semantic subgroups suggest a lexical deficit that affects verb comprehension, but not specifically comprehension of past time reference. 

Conclusions

These data add to the growing body of knowledge concerning the nature of the language deficits across the three variants of PPA. Our study suggests that grammatical impairment of past time reference is an important feature to consider in the language assessment of PPA.

Different Behaviors of the Adjective DE and Possessive DE in the Production by Chinese Adults with Post-stroke Aphasia
PRESENTER: Shengnan Ma

ABSTRACT.  

Introduction

It has been reported in the literature that people with aphasia have impaired production of inflectional morphemes (Goodglass & Berko, 1960; Goodglass & Hunt, 1958; Kean, 1977), and almost all studies focus on morphemes in European languages, such as the English plural marking -s and the possessive marking -s (cf. Goodglass & Hunt, 1958; Stemberger, 1984; Szupica-Pyrzanowska, Obler, & Martohardjono, 2017; Thompson, Fix, & Gitelman, 2002; Stockbridge et al., 2021). However, to the best of our knowledge, no study has examined the production of morphemes by Chinese patients with post-stroke aphasia. This paper fills this gap by investigating how the adjective DE and the possessive DE behave in picture-naming production by adult Chinese patients with post-stroke aphasia. The adjective DE and the possessive DE are phonologically and orthographically identical in Mandarin Chinese (hereafter Chinese), but they occupy different positions in Chinese syntactic structures. The adjective DE is a morpheme attaching the adjective to the noun being modified (i.e., in the form Adj.+DE+Noun), and the Adj.+DE is regarded as an adjunct attached to the NP (Huang, 1982; Chiu, 1993; Ning, 1993, 1995; Wen, 1996). The possessive DE, however, heads a DP with the possessor in the Spec of DP and the possessee in its complement (Cheng, 1999; Xiong, 2005); it also has the function of checking possessive case. Given the contrast between the adjective DE and the possessive DE with regard to their syntactic structures and functions, it was hypothesized that Chinese aphasics would perform better in producing the adjective DE than the possessive DE, as the latter involves more complex computations and its production can be more taxing for post-stroke aphasics.

 

Methods

We collected data from 20 participants aged between 41 and 79 (Mean=65.2, SD=12.1), all from northern China (southeastern Shandong Province and northwestern Anhui Province), whose dialects belong to Zhongyuan Mandarin, which is close to standard Mandarin. The author’s dialect is also Zhongyuan Mandarin, ensuring that the participants’ productions could be understood. Among these participants, 13 are men and 7 are women. Six received primary school education, 6 middle school education, 5 high school education, and 2 junior college education. All had their stroke more than one year before testing, had passed the acute phase, and were of moderate severity at the time of the study. All of them produced responses in both conditions.

 

Results

Participants’ production of the adjective DE (Mean=78.75, SD=27.24) was more accurate than their production of the possessive DE (Mean=53.75, SD=35.76), a significant difference between the two conditions (p=0.02). Specifically, 17 participants had a higher accuracy rate for the adjective DE, while three (No. 4, No. 6, and No. 9) showed no difference between the two conditions. All 17 of the former produced the adjective DE better than the possessive DE, and 3 of these 17 (No. 3, No. 8, and No. 18) produced no possessive DE at all, even though one of them had a much lower accuracy rate for the adjective DE than the other two, at only 20%. Of the 3 participants showing no difference between the adjective DE and the possessive DE, two achieved full scores for both conditions, while one performed much worse, achieving only a 10% accuracy rate for each condition.
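
The abstract does not name the statistical test behind p = 0.02; purely for illustration, a paired comparison of the two accuracy columns could be run as in the sketch below (the values are placeholders, not the participants' actual data).

import numpy as np
from scipy.stats import ttest_rel, wilcoxon

adj_de  = np.array([100, 90, 20, 80, 60, 100, 70, 90, 100, 85, 95, 75, 60, 100, 90, 80, 70, 100, 10, 100])
poss_de = np.array([100, 60,  0, 50, 30, 100, 40, 60,  80, 55, 70, 45, 20,  90, 60, 50, 30, 100, 10,   0])

t, p_t = ttest_rel(adj_de, poss_de)   # parametric paired comparison
w, p_w = wilcoxon(adj_de, poss_de)    # non-parametric alternative
print(p_t, p_w)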

 

Conclusion

We conclude that Chinese-speaking post-stroke aphasics produced the adjective DE more accurately than the possessive DE, with a significant difference between the two, supporting our hypothesis. The possessive DE involves a more complex syntactic structure than the adjective DE and therefore requires more effort to produce.

 

 

Training and generalization effects of verb tense training using irregular verbs in agrammatic aphasia

ABSTRACT. Introduction

Errors in the production of verb tense are a hallmark feature of agrammatic language production in aphasia (Bastiaanse et al., 2011; Friedmann & Grodzinsky, 1997). In languages that have both regular and irregular conjugations for verb tense, both forms are equally impaired (Faroqi-Shah, 2007). Intervention studies for verb tense, although limited, have shown that people with agrammatic aphasia are responsive to verb tense training (Dashti et al., 2018; Faroqi-Shah et al., 2008, 2013; Links et al., 2010). Moreover, tense training with irregular verbs as stimuli (e.g., sing-sang, drink-drank) generalizes to improved production of regular past tense (e.g., walk-walked, push-pushed) without explicit training of regular past tense formation rule. However, generalization to untrained irregulars is limited. Specifically, in two studies that conducted morphosemantic training of verb tense using irregular verbs as stimuli, production accuracy of untrained regular past improved by 84%, while the production accuracy of untrained irregular past verbs improved only by 55% (N=4; Faroqi-Shah, 2008, 2013).

Irregular verbs in English fall under different paradigms, which include: vowel-change (e.g., sing-sang), vowel-change + consonant addition (e.g., bring-brought), and no-change (e.g., hit-hit). Given that the prior studies to conduct irregular past tense training used verbs from a single paradigm (Faroqi-Shah, 2008; 2013), it remains to be investigated if using verbs from different irregular paradigms would enable better generalization to untrained irregular past verbs.  This study sought to examine: 1) training and generalization outcomes for irregular past tense production when training stimuli consist of irregular verbs from different paradigms; 2) generalization to untrained tenses (present and future tense) following past tense training.

 

Methods

Six individuals (4M, 2F) with agrammatic language production and verb tense deficit following a single left hemisphere stroke participated in the study (Age M(SD) = 51.5(23.2) years, time post-stroke M(SD)=2.3 (1.3) years). Training followed morphosemantic treatment procedures (Faroqi-Shah, 2008) and utilized a single-subject design with three baseline sessions and 18 hours of intervention. The training stimuli included 20 irregular verbs, 10 each of vowel-change and vowel-change + consonant addition verbs. The outcome variable was accuracy of verb tense production in sentences for a picture description task for: trained irregular, untrained irregular (N=20), and untrained regular (N=20) verbs across future, past, and present tenses.

 

Results

The pre- and post-treatment outcomes show a significant improvement in the production of trained irregular past (68.6%), untrained irregular past (43%), and untrained regular past (63.3%) tenses (Wilcoxon signed-ranks test, one-tailed p < .05). There was no improvement in present and future tense accuracy (29%, p > .05).
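
A minimal sketch of the one-tailed Wilcoxon signed-ranks comparison reported above, using SciPy; the accuracy values are placeholders, not the participants' actual scores.

import numpy as np
from scipy.stats import wilcoxon

# placeholder pre/post accuracies (proportions) for the six participants on one outcome
pre  = np.array([0.10, 0.20, 0.05, 0.15, 0.30, 0.10])
post = np.array([0.70, 0.80, 0.55, 0.75, 0.90, 0.60])

stat, p = wilcoxon(post, pre, alternative="greater")  # one-tailed: post > pre
print(stat, p)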

 

Conclusions

Using a mix of irregular verbs from different paradigms did not enhance generalization outcomes for untrained irregular past tense (55% in past studies vs 43% in the current study) supporting the nonproductivity of irregular verb paradigms (Leminen et al., 2019). Training past tense did not generalize to present and future tenses. Thus, this study highlights the importance of specifically targeting two components of tense training in agrammatic aphasia as these do not automatically generalize: individual irregular verbs and each tense.

Distinct aspects of phrasal production are associated with distinct lesion correlates in chronic post-stroke aphasia
PRESENTER: William Matchin

ABSTRACT. Introduction

We used the Morphosyntactic Generation Task (MorGen) (Stockbridge, Matchin, et al., 2021; Stockbridge, Walker, et al., 2021) to assess the lesion correlates of different aspects of phrasal production in people with chronic aphasia. The MorGen is designed to elicit two-word noun phrases involving different modifiers: numeral quantifiers (one vs. two), color adjectives (red vs. blue), size adjectives (big vs. small) and inflectional morphology (plural -s vs. null inflection, possessive -s). Here we report lesion-symptom mapping analyses in chronic post-stroke aphasia, in order to ascertain whether impaired production of different features is associated with distinct lesion correlates. Prior work in progressive aphasia found that size features were not a strong basis of impairment regardless of variant (Stockbridge, Matchin, et al., 2021), so only plural and possessive inflectional marking, color, and number were examined.

 

Methods

Twenty-six people with chronic post-stroke aphasia were assessed on the MorGen. The MorGen presents two simultaneous images in each trial, which contrast based on one feature (number, color, size, possession). Subjects are asked to describe the target image using two words. We assessed inflectional morphology by averaging across performance on plural and possessive -s and assessed color and number separately.

Subjects’ lesions were manually drawn on their MRI scans and subsequently warped to MNI space (Fridriksson et al., 2018). We tested three regions of interest (ROIs): posterior temporal lobe (JHU atlas, posterior STG and MTG), Broca’s area (JHU atlas, pars opercularis and pars triangularis), and anterior arcuate fasciculus (Catani atlas, overlap with posterior temporal lobe ROI removed). We calculated the percent damage to each ROI for each subject and then performed regression analyses in NiiStat (https://www.nitrc.org/projects/niistat/) to assess the relationship between performance on each measure and damage to each ROI.
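
For illustration, a minimal sketch of the proportion-damage computation and the subsequent behavior-damage regression, using nibabel and statsmodels with hypothetical file and variable names; the actual regressions were run in NiiStat, so this is an assumption-laden approximation rather than the authors' code.

import numpy as np
import nibabel as nib
import statsmodels.api as sm

def percent_damage(lesion_path, roi_path):
    # both images assumed to be binary masks in MNI space
    lesion = nib.load(lesion_path).get_fdata() > 0
    roi = nib.load(roi_path).get_fdata() > 0
    return 100.0 * np.logical_and(lesion, roi).sum() / roi.sum()

def roi_behavior_regression(scores, damage):
    # scores, damage: (n_subjects,) arrays for one MorGen measure and one ROI
    X = sm.add_constant(np.asarray(damage))
    return sm.OLS(np.asarray(scores), X).fit()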

 

Results

Performance on the three measures of interest dissociated from each other. Performance deficits were associated with the following lesion correlates (p < 0.05). Color adjectives: damage to arcuate fasciculus (Z = -2.33) and posterior temporal lobe (Z = -2.30). Numeral quantifiers: damage to Broca’s area (Z = -2.00). Inflectional morphology: damage to arcuate fasciculus (Z = -2.81) and Broca’s area (Z = -1.77).

 

Conclusions

Color adjective deficits involved posterior regions, similar to those described previously for deficits on picture naming tasks with noun targets (Baldo et al., 2013; DeLeon et al., 2007; Fridriksson et al., 2018). Deficits in inflectional morphology were primarily associated with damage to arcuate fasciculus and, to a lesser extent, Broca’s area. This is consistent with the need to coordinate posterior temporal and inferior frontal regions with each other to select the correct inflectional form given the structural context (Matchin & Hickok, 2020). Finally, deficits on numeral quantifiers primarily implicated damage to Broca’s area, which is consistent with a role for this region in retrieving functional elements. In sum, production of different morphemes requires overlapping but distinct brain systems.

Verbal Inflection Processing in Spanish Speakers with Aphasia: Time and Agreement
PRESENTER: Camila Stecher

ABSTRACT. Introduction

There is substantial cross-linguistic evidence that the morphosyntax of verbs is impaired in agrammatic aphasia and that not all verb inflections are affected equally, with tense morphology being especially vulnerable (Benedet et al., 1998; Caramazza & Hillis, 1991; Friedmann & Grodzinsky, 1997). Two explanations for this deficit focus on the processing costs that the grammatical operations impose on People with Aphasia (PWA). The PAst DIscourse LInking Hypothesis (PADILIH; Bastiaanse et al., 2011) posits that reference to the past is particularly hard to produce because it requires discourse linking, which is affected in aphasia. Faroqi-Shah & Thompson (2007), on the other hand, suggest that there is a particular difficulty in retrieving the accurate verb forms, which affects temporal reference in general, in both regularly and irregularly tensed verbs, while sparing non-finite forms.

Our work aims to study these difficulties in agrammatic aphasia and to obtain evidence on verbal inflection processing in Spanish, which uses two different morphemes to express time and agreement. We are specifically interested in the performance pattern in Argentinian Spanish, in which simple inflected tenses are preferred over their complex counterparts, in contrast to peninsular Spanish. We intend to compare whether this pattern coincides with what has been reported for other languages in the literature, and to discuss whether the current hypotheses can account for it adequately.

 

Methods

Participants

Two PWA with agrammatism and two groups of healthy people (HPG), matched in age and education level, participated in the study. All were native speakers of Argentinian Spanish.

Procedure and materials

We developed a new assessment battery designed to tap the production and comprehension of verbal inflection for tense and agreement on isolated verbs and in sentences. It is composed of five tasks:

  1. Sentence completion task.
  2. Sentence elicitation task (SC).
  3. Picture-guided elicitation task.
  4. Grammaticality judgment task (GJ).
  5. Sentence-picture matching task.

For the stimuli we manipulated both time reference (past, present, and future simple tenses and adverbs) and the person and number of the subject (first person singular, and third person singular and plural). We used only transitive verbs with a maximum length of 3 syllables. All sentences had an Adverb-Subject-Verb-Object structure (e.g., “Ayer María pesó las naranjas” / “Yesterday María weighed the oranges”).

 

Results

The preliminary results show that the HPG performed at ceiling on these first two tasks, while the PWA encountered great difficulties. Both PWA had greater problems with stimuli evaluating Tense, which accounted for 29/38 and 9/10 of their wrong answers, respectively, on the SC task.

 

Conclusions

The results show a clear deficit in the processing of verbal inflection in the PWA compared to the HPG. While the evidence is still too scarce to draw conclusions about the explanatory adequacy of the hypotheses presented, there does not appear to be a specific problem with reference to the past in these participants. Nevertheless, these data need to be completed and compared with the results from the remaining three tasks in order to conduct a thorough analysis and understand the specific pattern of Argentinian Spanish in greater detail.

Impairments in verb retrieval in aphasia: A lexical-syntactic model
PRESENTER: Yuval Katz

ABSTRACT. Introduction

Theories of production deficits in types of aphasia are usually stated either in terms of lexical-retrieval, or in terms of sentence-building. However, these two processes must be interdependent. We examine an area of language that heavily relies on this interdependency – the retrieval of morphologically-complex verbs. We focus on a phenomenon that involves lexical-retrieval, lexical-syntactic information, and syntactic structure: alternating-verbs (e.g., “closed” in 'Daniel closed the door' and 'the door closed'). Whereas in English alternating-verbs usually sound the same (“close”), in Hebrew they usually share a consonantal-root but differ in their verbal-pattern (there are five verbal-patterns in Hebrew), each with a separate argument structure. For example, the alternating-verbs sagar and nisgar ('closedTransitive' and 'closedIntransitive' respectively) share the root SGR, related to closing, but have different verbal-patterns and argument-structures. sagar is used in sentences with two arguments ('Daniel closed the door'), whereas nisgar is used in sentences with one argument ('the door closed').

How are alternating verbs retrieved and inserted into the correct pattern and into a sentence with their arguments in the right syntactic positions?

 

Methods

We designed a battery of seven tasks assessing production of alternating verbs. We then tested 20 Hebrew-speaking individuals with various types of aphasia or developmental language-impairments, whose functional locus of impairment we diagnosed independently of verb-morphology, and compared their performance to each other and to control groups of non-impaired individuals. Finally, we inferred the function of each cognitive component in the production of morphologically-complex words based on patients' error-pattern.

 

Results

We report five error-patterns corresponding to five different functional loci of impairment in the proposed model. Based on this, we propose a lexical-retrieval model that considers morphology and other sentence-level considerations, elaborating on the role of each component in the production of morphologically-complex verbs in Hebrew.

Our model suggests that the conceptual stage includes the action and the participants; the semantic-lexicon retrieves the abstract verb; and the syntactic-lexicon selects the correct alternant, with the relevant arguments, according to the selected participants. The selected abstract verb is inserted in the syntactic tree according to the selected argument structure. Then, the phonological lexicon selects a matching phonological form, and the phonological buffer adds the phonological structure of the pattern.

We show that impairments in early stages of retrieval (the conceptual-system, the semantic-lexicon, and the syntactic-lexicon) cause, beyond other well-attested errors, verb-pattern substitutions only between verbs that are systematically related (alternating-verbs). When the deficit is in the syntactic-lexicon, the selected verb may also violate argument-structure restrictions. Impairments in the phonological-output-lexicon cause substitutions with verbs sharing the root, including not only alternating verbs but also verbs with no systematic alternation-relation (cava-'paint'↔hicbi'a-'vote'). Phonological-output-buffer deficits cause verb-pattern errors as well as tense and agreement errors.

 

Conclusions

Modeling sentence-building and lexical-retrieval as a unified process is not only conceptually desirable, but also useful for describing types of aphasia. Our results show that there is not one but many morphological impairments, and that production errors that may seem similar on the surface may stem from different underlying impairments in functional components that are usually not considered as involved in morphological processes. This has clinical implications for diagnosis and treatment.

A lesion-symptom mapping study of syntactic acceptability judgments in chronic post-stroke aphasia
PRESENTER: Danielle Fahey

ABSTRACT. Classically, expressive agrammatism has been associated with nonfluent Broca’s aphasia and frontal damage in lesion-symptom mapping (LSM) studies of people with stroke-based aphasia (PWA). Syntactic comprehension deficits have also been associated with Broca’s aphasia. However, some LSM studies have identified the posterior temporal and inferior parietal lobes in people with fluent aphasia, while others have associated damage to frontal networks with deficits in comprehension of non-canonical sentences. To test whether frontal or posterior damage results in syntactic comprehension deficits, we performed an LSM study using syntactic acceptability judgments, which assess sentences' well-formedness rather than their meaning. We predicted that PWA would detect word-order violations better than agreement or subcategorization violations, and that LSM would show an association between comprehension deficits and posterior temporal damage, but not frontal damage. To do this, we adapted an acceptability judgment task in two experiments using correct and matched ungrammatical sentences. We performed ANOVAs to identify differential accuracy by group (PWA vs. age-matched controls), violation type, and sentence location. We also performed LSM analyses. 22 PWA and 16 controls have participated to date. PWA performed more poorly in each category; ANOVA results for PWA showed effects of error type and location and an interaction between error type and location. Participants were more accurate on word-order violations, but unexpectedly more accurate on subcategorization violations than agreement violations. LSM analyses showed a significant association with a posterior temporal ROI but not with an inferior frontal ROI consisting of Broca’s area. Our results speak against the hypothesis that frontal regions support syntactic comprehension, and support previous associations of syntactic comprehension deficits with damage to the posterior temporal lobe.

Parsing Trimorphemic Words in Context: Evidence from Aphasia
PRESENTER: Kyan Salehi

ABSTRACT. Introduction

Linguistic productivity relies on the ability to compute morphologically complex hierarchical structures. This ability is mostly determined by accessing knowledge of selectional restrictions of roots and affixes. For instance, in a word such as unsinkable the prefix un- attaches to the complex adjective sinkable, not to the verb sink (thus, ruling out *unsink). Conversely, in the case of unlockable, both morphological structures can be computed: [un[lockable]] “not able to be locked” or [[unlock]able] “able to be unlocked”. As such, the correct parsing of these trimorphemic structures directly determines the derived meaning. Few experimental studies have investigated the parsing and interpretation of these types of words in isolation and in context (de Almeida & Libben, 2005; Libben, 2003; Libben, 2006; Pollatsek et al., 2010), with results pointing to either right- or left-branching preference, with factors such as context and frequency affecting later, not initial stages of analysis. We investigated morphological parsing in individuals with aphasia aiming to understand (a) whether there is a default parsing strategy, (b) how sentential-semantic context influences parsing preferences, and (c) the breakdown of morpho-semantic processing across different clinical groups of aphasia.

 

Methods

Participants were 12 individuals with aphasia (3 fluent [FL], 2 mixed [MX], 2 mixed but predominantly non-fluent [MN], 5 non-fluent [NF]). Controls were 30 healthy individuals matched in age, sex, and education to the clinical groups. All participants were native speakers of English. Stimuli consisted of 48 sentences containing ambiguous trimorphemic words (e.g., unlockable), with 24 biasing towards the left-branching, 24 towards the right-branching analysis of the trimorphemic word (e.g., ‘When the zookeeper went to unlock/lock the cage, he found it was unlockable’). In addition, materials included 24 sentences containing left-branching words ([[refill]able]) and 24 sentences containing right-branching words ([un[sinkable]]). These sentences were divided into two booklets, with each participant completing one booklet. Participants rated how good each sentence was on a 5-point scale (Rating task), and then were asked to indicate, by drawing a vertical line, where a separation could be made on a target word (Parsing task), which was always a word from the sentence presented below the rating scale.

 

Results

Correct parsing was analyzed by items considering word type (right-branching ambiguous, left-branching ambiguous, right-branching unambiguous, left-branching unambiguous) and group (controls, FL, MX, MN, NF), with repeated measures on the second factor. A cut before the suffix was considered correct in the case of left-branching trimorphemic words (e.g., [[unlock]able] and [[refill]able]) and a cut after the prefix for right-branching words (e.g., [un[lockable]] and [un[sinkable]]). Results showed no significant main effect of group (F (4, 55) = 1.62, p = .18, ηp2 = .11). However, both the MX and the NF groups differed significantly from the control group across most word types.

 

Conclusions

Results are consistent with previous online experiments (de Almeida & Libben, 2005; Pollatsek et al., 2010) suggesting that the right-branching parse is preferred early in morphological analysis. Notably, the NF group shows the inverse effect, indicating that the morphological parser can be affected in non-fluent aphasia.

Negative Concord in Neglect Dyslexia
PRESENTER: Alessia Rossetto

ABSTRACT. Introduction

Neglect dyslexia (ND), mostly caused by a lesion in the right hemisphere, leads to difficulties in reading the left part of words or sentences. In this condition, the grammatical structure of a sentence seems to modulate the exploration of written material: Abbondanza et al. (2020) showed an advantage for the left periphery. In the present study, negative concord is considered. Negative concord, in Italian, is a structure characterized by two negative elements leading to a negative overall interpretation. For example, a negative adverb (e.g., mai - never, neanche - neither, nemmeno - nor) requires sentential negation at the beginning of the sentence (e.g., non - not). If spatial exploration is modulated by negative concord, we expect that the second negative element would trigger the search for the first negative element on the left. As a result, according to this hypothesis, sentences containing negative concord would be better read in ND with respect to sentences with single negation or no negation. Two different kinds of verb were analyzed: transitive and unaccusative verbs. Transitive verbs are characterized by the presence of an agent and a theme expressed by a direct object. Unaccusative verbs have a non-agentive subject which bears the thematic role of a theme, corresponding to the one expressed by the object of a transitive verb.

 

Methods

Patient ZE, a 61 y.o. businessman, with 8 years of education, had a tumor lesion in the right posterior temporal lobe, causing ND. He was asked to read sentences on a screen, one at a time. Sentences were matched for font and dimension, and they had a similar number of characters and words. There were 3 types of sentence: negative concord, single negation and no negation. Omissions of the left side of sentences were counted as neglect errors.

 

Results

ZE performed better in the negative concord condition (30/50) than in the other two conditions: single negation (16/50; Fisher exact test, p = .0094) and no negation (18/50; Fisher exact test, p = .0272), confirming the initial hypothesis. Only in the no negation condition did the patient omit fewer transitive verbs (8/24) than unaccusative verbs (20/26) (Fisher exact test, p = .0039).
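
A minimal sketch of how one such Fisher exact test could be set up in SciPy, assuming the reported counts index sentences read versus neglected in each condition; the table construction is an assumption made purely for illustration.

from scipy.stats import fisher_exact

# rows: negative concord vs. single negation; columns: sentences read, sentences neglected
table = [[30, 20],
         [16, 34]]
odds_ratio, p = fisher_exact(table)  # two-sided by default
print(odds_ratio, p)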

 

Conclusions

ZE neglected fewer words when reading sentences with negative concord, as predicted by our hypothesis. A negative adverb triggers the search for the corresponding initial negation, as the grammatical competence of the patient requires. Moreover, in the no negation condition, ZE clearly omitted more unaccusative verbs than transitive verbs. In some languages (e.g., German), though not in Italian, omission of semantically simple unaccusative verbs is allowed: a sentence such as Ich will nach Hause ('I want [to go] home') is understood as Ich will nach Hause gehen ('I want to go home'). In conclusion, both the negation and the verb-type effects show that the linguistic competence of a subject influences the exploration of written space and that ND is not modulated by an impossible grammar.

Automatic syntactic processing in agrammatic aphasia: the effects of grammatical violations
PRESENTER: Minsun Kim

ABSTRACT. Introduction

There are two symptoms in aphasia that indicate impaired syntactic abilities: agrammatic production and asyntactic comprehension (Bastiaanse & Thompson, 2012; Goodglass, Menn, & Kean, 1976; Saffran, Schwartz, & Marin, 1980). While it seems intuitive that persons with agrammatic production would also have asyntactic comprehension due to a core or central syntactic impairment, evidence so far is mixed. Agrammatic production and asyntactic processing do not necessarily co-occur in aphasia (Caplan et al., 2007; Caramazza & Zurif, 1976). The findings of prior research on asyntactic comprehension vary depending on the experimental task and syntactic contrasts used in the study. Additionally, the agrammatic participant group may not be clearly defined, using proxy terms such as nonfluent aphasia. When the performance of persons with agrammatic aphasia is compared to that of neurotypical adults, as has been done in several studies, it does not tease apart a general effect of aphasia from syntactic deficits. Thus, it is unclear if syntactic deficits in aphasia result from a central, amodal breakdown that affects both comprehension and production. Recent theoretical views suggest distinct neural correlates for agrammatic production (left frontal) and asyntactic comprehension (left temporal) (Matchin & Hickok, 2020). The main goal of this research is to delineate the nature of asyntactic comprehension deficits, if any, in individuals with agrammatic production. We examined performance in offline and online comprehension tasks and compared this with a group of aphasic participants who did not have agrammatic production.

 

Methods

The study recruited three groups of participants: agrammatic production (N=5), severity-matched non-agrammatic individuals with aphasia (N=7), and age- and education-matched neurotypical adults (N=9). Persons with agrammatic production were identified through narrative language analysis. Additional participants have been scheduled for data collection within the next month. Participants engaged in two computer-based tasks in which sentences with and without syntactic violations were presented (modelled after Faroqi-Shah et al., 2020): word monitoring, which is sensitive to online detection of syntactic violations, and auditory sentence judgement, which measured offline decisions about sentence well-formedness. The stimuli consisted of sentences with and without morphosyntactic (tense and word category) and semantic violations. Reaction time differences for word monitoring in sentences with and without syntactic violations yielded a word monitoring effect. Sensitivity in offline judgments was computed using D-prime (Macmillan & Creelman, 1991). Group (Kruskal-Wallis test) and single-subject (Crawford & Garthwaite, 2002) analyses were conducted.
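
For illustration, a minimal D-prime computation of the kind referenced above (z-transformed hit rate minus z-transformed false-alarm rate), with a common log-linear correction for rates of 0 or 1; the correction choice is an assumption, not necessarily the one used in the study.

from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    # log-linear correction avoids infinite z-scores when rates are exactly 0 or 1
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

print(d_prime(hits=40, misses=10, false_alarms=12, correct_rejections=38))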

 

Results

Group analyses showed differences for word category violations across both tasks. Single-subject statistics showed deficits in a subset of both agrammatic and non-agrammatic participants. It should be noted that there were no significant differences between the two aphasic groups on either task in Mann-Whitney U tests.

 

Conclusions

Although off-line sentence judgment and on-line sentence processing were impaired in agrammatic aphasia, the extent of these deficits did not differ from the non-agrammatic group. Thus, the current study does not support an “amodal” syntactic deficit in individuals with agrammatic production. This finding is consistent with views that identify distinct neural resources for sentence planning during production and sentence interpretation during comprehension (Matchin & Hickok, 2020).

Identification and Production of Chinese Classifiers by Stroke Aphasia Patients
PRESENTER: Yuying Liang

ABSTRACT. Stroke is often accompanied by an impairment of the ability to produce, comprehend, or repeat language. Based on recent spontaneous-speech test results obtained with a Chinese-translated version of the WAB at Shanghai Sunshine Rehabilitation Center, patients with aphasia encounter difficulties when they try to use classifiers. They tend to overuse the general classifier “ge” (个) and even use “ge” to replace other classifiers when dealing with noun phrases. Since Chinese is a typical classifier language, the use of classifiers is unavoidable and can serve as an auxiliary indicator for condition assessment. The research question is whether the brain damage of patients with aphasia also injures their ability for word categorization.

Unlike English, which has only measure words, Chinese has both functionally based classifiers and measure words. Therefore, in order to better understand the nature of categorization in a classifier system, it is important to differentiate classifiers from measure words. Accordingly, measure words were deliberately avoided when the experimental trials for patients with Chinese aphasia were designed.

 

Seventeen classifiers were selected, each paired with two typical noun collocations. Each trial uses a classifier with one noun for the identification task and the other noun for the production task. Pictures with text prompts were shown to participants, who either repeated the correct phrase or filled the missing classifier into a blank. The distractor in identification trials used the same noun as the correct option but paired it with a classifier that co-occurs with that noun at a very low frequency in Chinese. Numbers used in production trials varied randomly from one to five in order to prompt the patients to use a quantifying classifier for the blank while avoiding an excessive counting burden.

Six patients with Chinese aphasia due to stroke, aged around 65 years, took part in this experiment. These participants were all fluent native Chinese speakers before the onset of aphasia and had received rehabilitation training sessions. Data from people with normal language abilities were also collected as a baseline for analysis. The control subjects had similar ages and educational backgrounds to the patients and were mostly their family members.

Patients recognized most classifiers, with an accuracy of 91%, and produced 71% of the expected classifiers. Participants with normal language abilities reached 95% accuracy on the identification task and 85% on the production task. It appears that even with a large degree of recovery in language fluency, patients still have difficulty identifying and producing classifiers. Those with higher education recovered better and faster in word categorization. In addition, it appears easier for patients to recall appropriate classifiers when given abundant context. Therefore, practice with descriptive context is expected to help with patients’ recovery of language sensitivity and proficiency.

An assessment of the Resource Reduction Hypothesis for sentence processing in aphasia: a visual word study in German
PRESENTER: Dorothea Pregla

ABSTRACT. Introduction

Sentence comprehension performance in individuals with aphasia (IWA) is found to be variable, but performance on structurally complex sentences is often reported to be systematically more impaired than on simpler sentences (e.g., Caplan et al., 2013). The Resource Reduction Hypothesis (RRH, Caplan, 2012) takes both the variability in performance and the effect of structural complexity into account. According to this hypothesis, performance depends on the resource capacities of a given individual and on the resources a sentence requires for processing. If the available resources meet the demands, sentences are processed in a normal-like fashion; otherwise sentence processing is impaired. Importantly, the resources in IWA fluctuate randomly, leading to variability in performance. Based on the RRH, we derived predictions for the fixation behavior of IWA in the visual world paradigm. Specifically, we assessed structural complexity effects and variability in sentence comprehension in IWA.

 

Methods

We included 21 IWA (mean age = 60.2, range = 38–78 years, 1–26 years post onset) and 50 control participants (mean age = 48, range = 19–83 years), all native speakers of German. Sentence comprehension was assessed in a variant of the visual world eye-tracking paradigm by using an auditory sentence-picture matching task with two pictures (see Hanne et al., 2011). Sentence complexity effects were investigated in declarative sentences, relative clauses, and control structures with a pronoun or PRO. Variability in performance was investigated by comparing results between a test and a retest phase separated by two months. Proportions of looks to the target were analyzed using Bayesian hierarchical generalized linear models with the predictors test phase, sentence complexity, and time bin.
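
As a rough sketch of the kind of Bayesian hierarchical generalized linear model described, the snippet below fits a logistic model of sample-level target looks with the bambi library; the formula, random-effects structure, and column names are assumptions and may well differ from the actual analysis.

import pandas as pd
import bambi as bmb

looks = pd.read_csv("vwp_looks.csv")  # hypothetical: one row per participant x trial x time bin
model = bmb.Model("target_look ~ phase * complexity * bin + (1|participant)",
                  data=looks, family="bernoulli")
idata = model.fit(draws=2000, chains=4)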

 

Results

Both groups showed increases in target fixations over 50%. However, the increase in target fixations was higher in the control group compared to IWA across sentence types and test phases. Structural complexity effects: In the control group, target fixations in simple sentences exceeded the target fixations in complex sentences in all sentence types with the exception of control structures with a pronoun. In IWA, target fixations between simple and complex sentences diverged only in declaratives. Variability between test phases: The control group showed earlier target fixations in the retest phase in declaratives and in non-canonical relative clauses. The IWA showed later target fixations in the retest phase in non-canonical declaratives.

 

Conclusions

The finding that IWA showed increased target fixations in correct trials across all sentence types might suggest that sentence processing in correct trials is normal-like. This interpretation is supported by the RRH. However, three of our results speak against such a conclusion: 1) the increase in target fixations was lower in IWA than in controls, 2) IWA showed fewer structural complexity effects, and 3) IWA exhibited later increases in target fixations in the retest phase as compared to the test phase. We will consider how these differences between groups might be explained. Following Cope et al. (2017), we suggest that sentence processing in aphasia is guided by inflexible top-down predictions.

Assessing verb-argument structure and syntactic complexity in aphasia with the Italian version of the Northwestern Assessment of Verbs and Sentences (NAVS-I).
PRESENTER: Elena Barbieri

ABSTRACT. Introduction

Verb production in agrammatic aphasia is more impaired for verbs with complex (vs. simple) verb-argument structure (VAS, [1]). Namely, verbs requiring 2 or 3 arguments are more difficult to produce than 1-argument verbs, and optionally transitive verbs may be more difficult to produce than 1-argument verbs [2]. In agrammatism, production of non-canonical sentences with Object-Verb-Subject (OVS) order is also more impaired than that of canonical (SVO) sentences [1].  Building on these findings, the Northwestern Assessment of Verbs and Sentences (NAVS, [3]) was developed to evaluate verb and sentence production and comprehension in aphasia. Results from English and German participants with aphasia show that the NAVS is able to capture effects of VAS and syntactic complexity in both agrammatic participants [1] and individuals with mild (residual) forms of aphasia [4].

 

Methods

Forty-four healthy participants (age range: 41-84) and 28 participants with aphasia (age range: 30-84) took part in the study. Sixteen were diagnosed with fluent aphasia (Wernicke’s, Conduction, or residual) and 12 with non-fluent (Broca’s) aphasia, based on language assessment ([5-6]).

All participants were administered a paper-and-pencil form of the Italian version of the NAVS (NAVS-I, [7]), which was adapted from the original NAVS [1].

 

Results

All but two participants with aphasia were significantly impaired (vs. healthy participants) on one or more subtests of the NAVS-I, based on Crawford’s statistical procedure.

Mixed-effects regressions showed, for the nonfluent group, better production of 1- (vs. 2-) argument verbs on the Verb Naming Test (VNT), and no effects of VAS complexity on the Argument Structure Production Test (ASPT). For the fluent group, verb production was more impaired for 3- (vs. both 1- and 2-) argument verbs on the VNT, although such differences disappeared when verb frequency and imageability were introduced as covariates, and on the ASPT. No effect of argument optionality was found.

Both fluent and nonfluent groups showed better production and comprehension of canonical (vs. non-canonical) sentences. In production (Sentence Production Priming Test, SPPT), the canonical advantage was greater for longer than for shorter sentences in nonfluent participants, and for people with lesser (vs. greater) aphasia severity (as measured by the Token Test [8]) in the fluent group. On the Sentence Comprehension Test (SCT), comprehension of object-relative sentences was significantly more impaired than that of subject-relative sentences in nonfluent, but not fluent, participants.

 

Conclusions

Results indicate that verb production, in both nonfluent and fluent aphasia, is affected by VAS complexity, in line with previous studies [3-4, 9]. Contrary to some findings [2-3] and in line with data obtained from healthy participants [7], argument optionality did not influence verb production in either group, suggesting that, for Italian speakers, optionally transitive verbs are not stored with two VAS representations in the lexicon.

Syntactic complexity, both in terms of NP-movement (passives) and Wh-movement (object clefts, object relatives), affected sentence production and comprehension in both nonfluent and fluent aphasia. However, while a canonical advantage in production was found in all nonfluent participants independently of aphasia severity, it emerged only in mild (residual) forms of fluent aphasia (see [4]), i.e., when lexical retrieval is relatively spared.

Structural priming of active and passive sentences in Italian speakers with aphasia
PRESENTER: Giulia Bencini

ABSTRACT. Introduction

Structural priming is the tendency for speakers to produce previously processed sentence structures, even when the structures are syntactically more complex than equally suitable alternatives to express the same meanings (Bock, 1986; Pickering & Ferreira, 2008; Branigan & Pickering, 2016; see Mahowald et al., 2016 for a meta-analysis and review). Structural priming has been conceptualized and modeled as a form of implicit learning within the language processing system (Bock & Griffin, 2000; Chang, Dell, & Bock, 2006; Chang, Janciauskas, & Fitz, 2012).

On some accounts, the difficulties people with aphasia (PWA) have with complex sentence structures such as passives are the consequence of processing difficulties, rather than a loss of linguistic representations per se (see Thompson et al., 2015 for a review). Consistent with a processing account, studies have found that structural priming in speakers with agrammatic aphasia results in facilitated access to primed sentence structures (e.g., Cho-Reyes et al., 2016; Hartsuiker & Kolk, 1998; Lee & Mann, 2017; Lee et al., 2019; Rossi, 2015; Saffran & Martin, 1997; Verreyt et al., 2013). The aims of this study are to examine structural priming in aphasia in Italian and to explore priming in a broader range of aphasia types.

 

Methods

Participants

We present data from four PWA resulting from a left hemisphere stroke. Participants were assessed with the Italian version of the Aachener Aphasie Test (Luzzatti et al., 1996). They were also assessed on written and spoken sentence comprehension; confrontation naming and picture description; written and spoken grammaticality judgments (Batteria per l'Analisi dei Deficit Afasici; Miceli et al., 1994); and passive comprehension (Psycholinguistic Assessment of Language Processing in Aphasia; Kay et al., 1996). Participants were all right-handed, had no visual or hearing impairments, and had no prior neurological or speech-language disorders.

Materials and procedure

32 transitive prime sentences (half active, half passive) were paired with 16 target pictures. Agents were always inanimate (e.g., rock) and patients were animate (e.g., boy). Target pictures were scene sketches of transitive events with inanimate agents and animate patients, with the infinitive form of the verb written below. Prime sentences were audio-recorded and played via headphones. The experiment was computerized using PsychoPy software (Peirce, 2007). Participants repeated each prime sentence and described each target picture.

 

Results

PWA showed a priming effect for actives and passives. They produced more actives after active primes (.38) than after passive primes (.11) or no primes (.20) and more passives after passive primes (.67) than after active primes (.38), or no primes (.50).

 

Conclusions

Our results indicate that structural priming is effective in Italian speakers with aphasia. This adds to the growing body of research showing that priming in speakers with aphasia facilitates access to and use of primed sentence structures, even when these are linguistically more complex than their alternatives. These results are consistent with a processing account of the linguistic deficits in aphasia (Thompson et al., 2015) and open up the possibility of using interventions based on structural priming paradigms in language rehabilitation, across aphasia types.

Processing of reflexive anaphors in Turkish aphasia: an eye-tracking during listening study
PRESENTER: Seckin Arslan

ABSTRACT. Introduction

People with aphasia (PWA) present impairments in working out what pronominal elements refer to (see Arslan, Devers, & Ferreiro, 2021 for a review). While some studies have shown that reflexive anaphors (i.e., oneself) are relatively retained in aphasia (e.g., Grodzinsky et al., 1993), others have shown that reflexives are impaired on a par with other pronoun variables (Choy & Thompson, 2010; Edwards & Varlokosta, 2007) or are processed at a slower than normal speed (Burkhardt et al., 2008). Turkish presents a curious case with two types of reflexives: kendi 'oneself', which is assumed to behave as a local reflexive, and kendisi, which is rather unconstrained in its behavior (Kornfilt, 2001). However, experimental data reveal that both kendi and kendisi show a flexible binding relationship as local/long-distance reflexives (Gračanin-Yuksek et al., 2017). This study examines whether and how the Turkish reflexive system is affected in aphasia.

 

Methods

Four individuals with non-fluent aphasia (all male, mean age = 57) and 22 non-brain-damaged controls (13 females, mean age = 42) were recruited. The cognitive profiles of these individuals were screened with the Turkish version of the Token Test App (Arslan et al., 2020), the Test Your Memory task (Maviş et al., 2015), and the digit span tasks (Wechsler, 2008). An eye-movement monitoring during listening experiment was administered in which the participants listened to 48 sentences across four conditions. A two-by-two fully crossed design was used; we manipulated contextual referential bias towards local/non-local antecedents potentially bound by the kendi and kendisi reflexive elements, see (1). The participants were asked to click/point to the person referent that the reflexive anaphor refers to.

 

(1) Bir [hemşirenin/doktorun] tutuklandığı davada, hemşire doktorun [kendini/kendisini] savunduğu vurguladı.

‘In the court case where a [nurse/doctor] was arrested, the nurse emphasized that the doctor was defending kendini/kendisini-oneself’.

 

Results

The end-of-sentence response data showed that the PWA had a strong non-local interpretation in both reflexive conditions, while the controls considered local antecedents more frequently for ‘kendi’ than for ‘kendisi’ reflexives, as evidenced by a significant Group×Reflexive interaction (ß=-1.27, SE=0.42, z=-2.99, p=0.002). Eye-movement data were analyzed using the growth-curve approach (Mirman, 2017), with separate models for the proportion of looks to local (doctor), non-local (nurse), and distractor referents (genitor). We found that for the kendi reflexive, the PWA had reduced fixations on both local and non-local antecedents compared to the controls (p<0.025). For kendisi, we found no group differences in local fixations (p=0.42), while the controls fixated significantly more often on non-local antecedents than the PWA did (p=0.037).
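
To illustrate the growth-curve approach named above, here is a hedged sketch of how fixation proportions might be modeled over time with orthogonal polynomial time terms and a group effect. The data, bin size, and random-effects structure are invented, and the published models were likely richer (e.g., random slopes and separate models per referent).

```python
# Hedged sketch in the spirit of Mirman (2017); not the authors' code or data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
time_bins = np.arange(20)                          # e.g. 50-ms analysis bins
# Orthogonal linear/quadratic time terms via QR decomposition of a Vandermonde matrix.
X = np.vander(time_bins, 3, increasing=True).astype(float)
ot = np.linalg.qr(X)[0][:, 1:3]                    # ot[:, 0] linear, ot[:, 1] quadratic

rows = []
for group, n in [("control", 22), ("PWA", 4)]:
    for s in range(n):
        base = 0.6 if group == "control" else 0.45
        looks = base + 0.3 * (time_bins / time_bins.max()) + rng.normal(0, 0.05, len(time_bins))
        for t, y in zip(time_bins, looks):
            rows.append({"subject": f"{group}{s}", "group": group,
                         "ot1": ot[t, 0], "ot2": ot[t, 1], "prop_looks": y})
df = pd.DataFrame(rows)

# Random intercepts per participant; separate models would be fit per referent type.
m = smf.mixedlm("prop_looks ~ group * (ot1 + ot2)", data=df, groups=df["subject"])
print(m.fit().summary())
```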

 

Conclusions

This study set out to explore moment-by-moment processing of reflexive anaphors in Turkish aphasia. One conclusion we can draw here is that the PWA resort to a non-local interpretation of reflexive anaphors, signalling that the locality constraint on the kendi reflexive is only loosely applied in syntactic comprehension in aphasia. Our data present an opposing picture to the theory which posits that reflexives are retained in aphasia because they refer to local, and hence structurally closer, referents.

On the comprehension of relative clauses in mild AD: The role of feature mismatch in the subject and object DPs
PRESENTER: Dimitra Arfani

ABSTRACT. Introduction

Studies in Alzheimer’s disease (AD) have observed difficulties in sentence comprehension, particularly in structures with non-canonical argument order, such as object-extracted relative clauses (ORCs) (e.g., Emery, 1985; Marková et al., 2017; Molympaki et al., 2013). However, these difficulties occur in a moderate stage of the disease, whereas in an early stage lexical-semantic deficits are often present (Taler & Phillips, 2008). Lately, difficulties in the comprehension of non-canonical structures in people with acquired language deficits, like agrammatic aphasia, have been discussed within the Relativized Minimality (RM) approach (Rizzi, 1990, 2004). According to this approach, ORCs are hard to comprehend when the moved and the intervening subject DP carry similar φ-features, whereas mismatch in φ-features between the two DPs facilitates comprehension (Garaffa & Grillo, 2008; Grillo, 2009). However, it has been argued that only features of the verbal inflection system that trigger syntactic movement count in the computation of minimality (Friedmann et al., 2017; Terzi et al., 2018). The extent to which comprehension of ORCs in AD can be accounted for within RM remains unknown. This study aims to test the RM approach in mild AD by experimentally manipulating syntactically active φ-features (i.e., number in Greek) and lexical-semantic φ-features (i.e., gender in Greek) in the comprehension of ORCs.

 

Methods

Twenty-seven Greek-speaking individuals with mild AD (MMSE score: 18-26, 65-86 years old) and 27 age- and education-matched healthy adults (MMSE score: 28-30) were administered an off-line sentence comprehension task that manipulated number x gender in a within-subjects nested design. Stimuli consisted of 80 object right-branching relative clauses. In 20 of the sentences the subject and the object DP were singular and had the same gender value (number match/gender match), in 20 sentences the two DPs had same gender but different number values (number mismatch), in 20 sentences they had same number but different gender values (gender mismatch) and in 20 sentences they had different number and gender values (number mismatch/gender mismatch).

 

Results

Healthy controls showed no interaction but a significant main effect of number (repeated-measures ANOVA, F(1,26)=15.921, p<.001, ηp²=.380), performing better in the number-mismatch conditions. Participants with AD, however, showed an interaction between number and gender (F(1,26)=17.196, p<.001, ηp²=.398) as well as a significant main effect of number (F(1,26)=10.506, p=.003, ηp²=.288). Their performance was best in sentences with number mismatch and gender match (80.4%) and worst in sentences with number match and gender match (66.7%). Performance in the gender-mismatch conditions was better with number match (77.4%) than with number mismatch (71.5%).
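
A hedged sketch of a 2×2 repeated-measures ANOVA of the kind reported above, with number and gender match/mismatch as within-subject factors. The accuracies are simulated, not the study data, and the sketch only illustrates the model structure.

```python
# Hedged sketch, not the authors' analysis: simulated accuracies for 27 subjects.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(2)
rows = []
for s in range(27):
    for number in ("match", "mismatch"):
        for gender in ("match", "mismatch"):
            # Invented accuracies with a number-mismatch advantage built in.
            acc = 0.70 + (0.10 if number == "mismatch" else 0.0) + rng.normal(0, 0.05)
            rows.append({"subject": s, "number": number, "gender": gender, "accuracy": acc})
df = pd.DataFrame(rows)

res = AnovaRM(df, depvar="accuracy", subject="subject",
              within=["number", "gender"]).fit()
print(res.anova_table)   # F and p for number, gender, and their interaction
```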

 

Conclusions

Features triggering syntactic movement (i.e. number in Greek) are accessible to individuals with mild AD and to healthy elderly, as their comprehension of ORCs improved in mismatch conditions. However, individuals with mild AD present an impairment in lexical features like gender, which manifests even when syntactic comprehension is easier, as in the number mismatch conditions.

Syntactic comprehension abilities of Slovenian-speaking individuals with Alzheimer’s Disease

ABSTRACT. Introduction

Probable Alzheimer’s disease (pAD) is characterized by difficulties in language processing, usually related to the word level and the processing of meaning. Syntactic abilities are thought to remain relatively preserved, even though recent studies have pointed towards a syntactic deficit as well, related to increased structural complexity (Emery, 2000). Specifically, individuals with AD have demonstrated comprehension deficits with object-extracted (OVS) sentences (Markova et al., 2017), reversible passives and non-canonical structures (Grober and Bang, 1995), as well as centre-embedded object sentences (Grossman et al., 1996). More recently, AD individuals were found to be worse on object-extracted relative clauses compared to subject-extracted relative clauses (Molympaki et al., 2013). In the majority of these studies, the observed syntactic deficits were attributed to working memory deficits. The present study aims to assess the syntactic abilities of Slovenian-speaking individuals in comprehending centre-embedded relative clauses (CE-RC), wh-questions, and non-canonical sentences (OVS). Slovenian is a highly inflected, morphologically rich language where grammatical morphemes clearly code the function of each word in the sentence.

 

Methods

To date, 4 participants with mild-to-moderate pAD (MMSE score: 16-25; age: 70-84) and 10 healthy controls have participated in a sentence-picture matching task. Participants had to match a total of 60 sentences with pictures depicting the corresponding action. Each sentence belonged to one of the following types: CE-RC (n=20), wh-questions (n=20), and OVS sentences (n=20). For each trial, participants were presented with three pictures (two in the case of wh-questions) and were instructed to choose the one that best described the sentence given to them.

 

Results

A general observation is that the participants with pAD do manifest a syntactic deficit, as they perform either at chance or below chance on the comprehension of the structures under investigation. Specifically, all four individuals performed below chance on CE-RC, while they performed at chance on wh-questions and OVS sentences, for which individual variability was also observed.
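
For illustration only (the abstract does not state how chance performance was tested): one common way to test whether a score on a three-picture matching task is below chance (1/3) is a binomial test. The item counts and score below are invented.

```python
# Hedged sketch, assuming a binomial test against a chance level of 1/3;
# n_items and n_correct are invented values, not the study data.
from scipy.stats import binomtest

n_items, n_correct = 20, 4            # e.g. one CE-RC subtest
result = binomtest(n_correct, n=n_items, p=1/3, alternative="less")
print(f"p(below chance) = {result.pvalue:.3f}")
```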

 

Conclusions

The current pilot results indicate a syntactic impairment in Slovenian-speaking individuals with pAD in the comprehension of complex structures. The dichotomy between CE-RC on the one hand, and wh-questions and OVS sentences on the other, could be attributed to the different demands that CE-RC pose on the memory mechanism supporting syntactic comprehension. Comprehension of CE-RC appears to be more difficult than comprehension of referential structures (wh-questions) and structures that are non-canonical in terms of argument realization (OVS). Thus, there appears to be a hierarchy of complexity which is reflected in pAD performance. As further data are collected and additional analyses performed, we will have a better idea of what might trigger this dichotomy.

Spoken discourse characteristics of Bengali Speakers with Alzheimer's Disease: A comparison of picture description and story narrative tasks
PRESENTER: Manaswita Dutta

ABSTRACT. Introduction

Deficits in spoken discourse have been documented in individuals with Alzheimer's Disease (AD; Duong et al., 2003; Fleming & Harris, 2008); however, the majority of studies involve English-speaking participants (Slegers et al., 2018). Consequently, our understanding of discourse impairments in languages other than English remains limited. Bengali is a pro-drop, highly inflected language from the Indo-Aryan language family (Dash, 2015). It is the seventh most spoken language in the world, yet, to date, no studies have investigated the spoken discourse characteristics of Bengali individuals with AD. The current study aimed to compare and identify differences in spoken discourse performance, elicited using two discourse tasks, between Bengali speakers with AD and matched healthy controls (HC).

 

Methods

Six individuals with AD (mean age = 66.83, SD = 11.28) and six age-, education-, and gender-matched HC (mean age = 70.33, SD = 4.22) participated. All participants described the Western Aphasia Battery (WAB) picnic scene and retold the Frog story. Language samples were analyzed in terms of productivity, lexical, semantic, and morphosyntactic aspects using Quantitative Production Analysis and Correct Information Unit (CIU) analyses. Performances were compared between groups using non-parametric statistics.

 

Results and Discussion

Our results demonstrate that, compared to picture description, the Frog story task was more sensitive in revealing linguistic differences between the two groups. Specifically, in line with prior AD research in English (e.g., Ash et al., 2007; Sajjadi et al., 2012), the Frog story showed significant group differences across all domain measures (i.e., reduced productivity, simplified syntactic complexity, and impoverished semantic content). Interestingly, in contrast to studies documenting overuse of pronouns and inflectional errors in AD (e.g., Ahmed et al., 2012; Fraser et al., 2015), the Bengali individuals with AD demonstrated a smaller proportion of pronouns than HC and no noun or verb inflectional impairments. In comparison, picture description differences were observed for the proportion of well-formed sentences and CIU measures; most participants mainly listed the picture elements (Garrard & Forsyth, 2010). Importantly, the most common domain of impairment across the two tasks was semantics, characterized by reduced semantic content and efficiency. Therefore, picture description tasks can be a valuable tool to assess semantic impairments in AD (Mueller et al., 2018; Sajjadi et al., 2012), whereas narrative tasks elicit richer language and thus can be useful for comprehensively documenting linguistic impairments in languages that have not yet been explored in depth in neurologically impaired populations.

 

Conclusions

This study is the first to characterize the spoken discourse of Bengali participants with AD, revealing similarities with English-speaking patients but also language-specific differences. Further, our findings indicate that narrative tasks are more sensitive in revealing linguistic differences between AD and HC at the lexical, morphosyntactic, and semantic levels. Thus, relying solely on picture description tasks may not be sufficient for assessing the spoken discourse of individuals who speak languages that are structurally different from English.

Application of Perceptual Rating Features to Measure Functional Discourse in Aphasia
PRESENTER: Katherine Bryan

ABSTRACT. Introduction

Discourse abilities are frequently impacted by aphasia, and there are several potential advantages to analyzing discourse in this clinical population, including (a) its sensitivity to a wide range of speech and language disturbances, (b) its ability to detect difficulties in integrating linguistic processes, and (c) its representativeness of naturalistic communication. Casilio and colleagues (2019) recently created the auditory-perceptual rating of connected speech in aphasia (APROCSA) system, which helps overcome barriers to discourse analysis (e.g., limited time and resources) by combining advantages of both quantitative linguistic analysis and qualitative rating scales. However, the APROCSA currently lacks measures of functional and pragmatic language use in discourse.

This study aims to expand upon the existing structural APROCSA features by identifying valid perceptual features that characterize functional and pragmatic aspects of discourse abilities in adults with chronic post-stroke aphasia. We aim to evaluate (a) reliability of functional perceptual feature ratings and (b) concurrent validity of these feature ratings with respect to commonly used quantitative linguistic measures (e.g., percent correct information units) identified a priori as theoretically and conceptually related.

 

Methods

To expand upon the existing APROCSA system and enable subsequent within-subject analyses, we used the same 5-point perceptual rating scale and included the same group of 24 participants as Casilio and colleagues (2019). Audiovisual language samples were drawn from the Free Speech Sample portion of the English PWA Protocol Data within the AphasiaBank database (MacWhinney et al., 2011). We identified perceptual features intended to characterize functional and pragmatic aspects of discourse abilities in aphasia based on a literature review, which demonstrated that people with aphasia tended to differ in their reference to agents (Olness et al., 2012), extent of elaboration (Kong et al., 2018), local and global coherence (Andreetta et al., 2012), appropriate pragmatic communication behaviors (Irwin et al., 2002), and utilization of symbolic gestures (Olness et al., 2012), which may impact the overall informativeness of connected speech (Leaman & Edmonds, 2019). Following a calibration protocol, three experienced aphasia researchers will score nine perceptual variables (unclear or omitted referents, omission of essential information, disruption of flow, off-topic, inappropriate conversational pragmatics, hyper-responsiveness, gestural meaningfulness, reluctance to speak, and completeness of message) on a 5-point scale. A primary coder will record four linguistic variables (percentage of pronominal referencing errors, percentage of local coherence errors, percentage of global coherence errors, and percent correct information units) using the Child Language Analysis (CLAN) program. A reliability coder will follow the same procedure for six random samples.

 

Results & Conclusion

Analyses will be conducted using the software package R. Interrater reliability will be calculated for each variable of interest via intraclass correlation coefficients (ICCs) and correlations between indices of perceptual features and their linguistic counterparts will be calculated using Pearson correlations. The rating procedure and analyses will be completed by June and July, respectively. We predict that there will be strong interrater reliability for functional perceptual feature scores and that functional perceptual feature rating scores will be positively correlated with theoretically-related quantitative linguistic measures.
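
For illustration only: the authors plan to run these analyses in R, but a hedged Python sketch of the same two steps (intraclass correlations for interrater reliability, Pearson correlations for concurrent validity) might look like the following. The data, column names, and the use of the pingouin package are assumptions.

```python
# Hedged sketch, not the study's analysis code; all ratings and scores are simulated.
import numpy as np
import pandas as pd
import pingouin as pg
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
samples = [f"p{i}" for i in range(24)]
long = pd.DataFrame([
    {"sample": s, "rater": r, "rating": rng.integers(0, 5)}   # 0-4 on a 5-point scale
    for s in samples for r in ("R1", "R2", "R3")
])
# Intraclass correlation coefficients across the three raters.
print(pg.intraclass_corr(data=long, targets="sample", raters="rater", ratings="rating"))

# Concurrent validity: one perceptual feature score vs. a linguistic counterpart (% CIUs).
feature_score = rng.normal(2.5, 1.0, 24)
pct_ciu = 60 - 8 * feature_score + rng.normal(0, 5, 24)
r, p = pearsonr(feature_score, pct_ciu)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```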

The relationship between language impairment and narrative organisation: New methods to measure deviation from the “typical structure”

ABSTRACT. Introduction

Effective communication involves complex structures at different levels, including organization within utterances (microstructure) and the narrative discourse emerging from the combination of utterances (macrostructure). Investigations into the relationship between micro- and macrostructure help us understand the impact of aphasia on communicative success. Previous investigations primarily focused on the number of errors at both levels (e.g., grammatical errors, coherence errors), measures of informativeness, and modalising behavior, such as highlighting important aspects of the narrative (Andreetta et al., 2012; Andreetta & Marini, 2015; Linnik et al., 2016; Olness et al., 2010). Our study applies a new frequency-based approach which, instead of deciding whether a part of the narrative contains an error, determines how typical each chunk of the narrative is in relation to the control group.

 

Methods

We collected “Dinner Party” comic narrative samples from 20 English-speaking people with aphasia (convenience sampling) and 30 controls. At the level of microstructure, we examined a range of variables including Word Count, Mean Length of Utterance (MLU), language errors (split into lexico-semantic and grammatical errors), and grammatical complexity. At the level of macrostructure, we were interested in the basic propositions narrating the story (e.g., “the man washes the dishes”) and in qualitative descriptors, which enrich the narrative by adding evaluation and judgment (e.g., “he is not going to get away with that”). Based on the control data, we made a list of basic propositions and measured their frequency, using concepts from the artificial language learning literature (Knowlton & Squire, 1994). The new variable, “Associative Chunk Strength” (ACS), captures how typical a given part of the narrative is in reference to the control data. We further counted the number of Qualitative Descriptors in each sample and computed a ratio by dividing this count by word count.
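
A hedged sketch of how an ACS-style typicality score could be computed: the score of a narrative is the mean control-corpus frequency of its proposition bigrams, following the chunk-strength logic of Knowlton & Squire (1994). The propositions and counts are invented, and the authors' exact chunk definition (bigrams vs. larger chunks) is not specified here.

```python
# Hedged sketch of an Associative-Chunk-Strength-style measure; not the authors' code.
from collections import Counter

control_narratives = [
    ["man_washes_dishes", "dog_steals_food", "guests_arrive"],
    ["man_washes_dishes", "guests_arrive", "dog_steals_food"],
    ["man_washes_dishes", "dog_steals_food", "guests_arrive"],
]

def bigrams(seq):
    return list(zip(seq, seq[1:]))

# Chunk (bigram) frequencies of basic propositions in the control data.
chunk_freq = Counter(bg for narrative in control_narratives for bg in bigrams(narrative))

def acs(narrative):
    """Mean control-corpus frequency of the narrative's proposition bigrams."""
    bgs = bigrams(narrative)
    return sum(chunk_freq[bg] for bg in bgs) / len(bgs) if bgs else 0.0

print(acs(["man_washes_dishes", "dog_steals_food", "guests_arrive"]))  # typical order -> 2.0
print(acs(["guests_arrive", "man_washes_dishes", "dog_steals_food"]))  # less typical -> 1.0
```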

 

Results

As expected, participants with aphasia differed significantly from controls on several microstructural variables: they had a shorter MLU, produced less complex grammatical structures, and made more referential and syntactic errors (all p < .001). At the macrolevel, the groups did not differ in their structure of basic propositions (p = .20), but participants with aphasia produced fewer qualitative descriptors (p = .02). Backward stepwise regressions identified MLU as the microstructure variable that best predicted ACS and the number of qualitative descriptors in speakers with aphasia (p < .001).

 

Conclusions

Despite substantial language impairment, the basic narrative organization of the samples was similar between groups. However, speakers with aphasia showed a marked decrease in evaluative statements, which strongly support communication. Deviations from controls were predicted most strongly by MLU.

Comparison of Main Concept and Core Lexicon Productions between the Modern and Original Cookie Theft Stimuli in Healthy Control Participants

ABSTRACT. Introduction

Discourse analysis provides important insight into linguistic and cognitive function and can give clinicians insight into functional communication abilities that standard assessments do not capture. Until recently, discourse analysis procedures have relied on time-consuming transcription. However, recent research (Dalton, Hubbard, & Richardson, 2020) has established the utility of main concept analysis (MCA) and core lexicon analysis (CoreLex) in clinical language assessment. MCA compares the completeness and accuracy of story concepts to a normative sample, while CoreLex compares typicality of word choice. The current study presents normative MCA and CoreLex checklists for the original (Kaplan et al., 2001) and modern (Berube et al., 2019) cookie theft pictures.

 

Methods

Forty-five transcripts for the original cookie theft stimulus were retrieved from the AphasiaBank database, and an additional 48 transcripts for the modern cookie theft stimulus were contributed by author SB. Development of main concepts followed previously published procedures (Richardson & Dalton, 2015). Briefly, a list of all relevant concepts was created from the transcripts for each task, and we then tallied the number of times each relevant concept was produced across transcripts. All relevant concepts produced by 33% or more of the sample were considered main concepts. Similarly, to identify the core lexicon for the two tasks, the procedure outlined in Dalton & Richardson (2015) was followed: lists of all lemmas in each transcript were created, the frequency of occurrence of lemmas across transcripts was calculated, and any lemma produced by 50% or more of the sample was included as a core lexicon item.
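
A hedged sketch of the two inclusion criteria described above (concepts produced by at least 33% of the sample become main concepts; lemmas produced by at least 50% form the core lexicon). The transcript contents are invented placeholders, not items from the actual checklists.

```python
# Hedged sketch of the thresholding step; not the published checklists or code.
from collections import Counter

# Each transcript represented as the set of relevant concepts / lemmas it contains.
concept_sets = [
    {"boy_on_stool", "stool_tipping", "mother_drying_dishes"},
    {"boy_on_stool", "sink_overflowing", "mother_drying_dishes"},
    {"boy_on_stool", "stool_tipping", "sink_overflowing"},
]
lemma_sets = [{"boy", "cookie", "take", "fall"},
              {"boy", "cookie", "mother", "water"},
              {"boy", "cookie", "take", "water"}]

def items_above_threshold(sets, threshold):
    counts = Counter(item for s in sets for item in s)
    n = len(sets)
    return sorted(item for item, c in counts.items() if c / n >= threshold)

main_concepts = items_above_threshold(concept_sets, 0.33)   # >= 33% of transcripts
core_lexicon = items_above_threshold(lemma_sets, 0.50)      # >= 50% of transcripts
print(main_concepts)
print(core_lexicon)
```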

 

Results

Main concept analysis yielded 9 individual main concepts for the original cookie theft and 14 for the modern cookie theft. Six of the nine concepts for the original cookie theft were also present in the modern cookie theft, while the modern cookie theft yielded eight additional, unique concepts. CoreLex lists were generated for both stimuli. Twenty-six and forty-one lexical items were identified from the normative samples for the original and modern cookie theft, respectively. Nineteen lexical items were shared across the lists.

 

Conclusions 

The modern cookie theft stimulus incorporates new characters and actions and is visually richer. The instructions also differ from the original: participants describing the modern scene are asked to talk about the picture as if describing it to a person who is blind. This instruction appears to be effective in eliciting longer descriptions made up of more main concepts and core lexical items. This work demonstrates that image complexity and task instructions affect performance in a normative sample. MCA and CoreLex are sensitive across clinical populations and are quick, functional assessments of communicative ability (e.g., Dalton & Richardson, 2019; Dalton, Hubbard, & Richardson, 2020). The creation of MCA and CoreLex checklists for the original and modern cookie theft images will allow researchers and clinicians to compare performance across clinical populations as well as directly compare performance across stimuli, which is important given the extensive use of the original cookie theft in previous research.

The accuracy-fluency trade-off in non-fluent aphasia
PRESENTER: Halima Sahraoui

ABSTRACT. Agrammatic utterances have traditionally been characterized by their reduced morpho-syntactic complexity. In addition, fluency disturbances (pauses, false starts, self-corrections, overuse of fillers and pragmatic operators, repetitions, and revisions) are frequent, making speech production visibly effortful. Results from previous corpus analyses confirm that variability in agrammatic performance is a key feature for understanding not only the underlying impairment but also strategic language use (Kolk & Heeschen, 1990; Sahraoui & Nespoulous, 2012). Across-task variability results from the use of differential adaptation strategies related to the amount of focus on form, which may enable better grammatical accuracy under certain conditions. Based on this observation, we hypothesize that agrammatic speakers may over-use monitoring skills in language production at a pre- or post-articulatory stage and that this monitoring can make fluency vary as a function of the type of task. To understand how speech (non)fluency interacts with morpho-syntactic well-formedness, we examined a corpus from 5 agrammatic and 9 control speakers in different tasks: spontaneous speech, narrative, picture description, and sentence production. Results show fewer disfluencies and revisions in free speech compared to picture description and targeted sentence production. We thus conclude that task-sensitive accuracy and fluency are also due to the intervention of pre- or post-articulatory speech monitoring and executive functions, leading to trade-offs between fluency and grammatical accuracy, in support of the adaptation theory.

References

Kolk, H. H. J. & Heeschen, C. (1990). Adaptation and impairment symptoms in Broca's aphasia. Aphasiology, 4(3), 221-231.

Sahraoui, H. & Nespoulous, J‐L. (2012). Across‐task variability in agrammatic performance. Aphasiology, 26(6), 785-810.

Effects of lexical frequency and collocation strength of word combinations on speech pause duration of individuals with and without aphasia

ABSTRACT. Introduction

In aphasia, an increase in number of pauses and pause duration (PD) contributes to communication difficulties. Pauses in speech reveal neurocognitive processes underpinning language production (Butterworth, 1979). Previous studies have found that PD was lower before words with higher frequency (Beattie & Butterworth, 1979; Goral et al., 2010). However, frequency also manifests as collocation strength between words. Stronger collocations may be processed more holistically, reducing processing effort, and speakers with aphasia tend to produce more strongly collocated combinations (Bruns et al., 2019, Zimmerer et al., 2018). In this study, we investigated the effects of Lexical Frequency and Collocation Strength on PD in narrations of individuals with aphasia (IWA) and neurotypical controls (NC). We predicted pauses would be shorter before words of higher frequency, or within stronger collocations.

 

Methods

20 NC and 20 IWA narrated the “Dinner Party” comic (Fletcher & Birt, 1983). Aphasic participants presented with a range of impairments and severities, including both fluent and non-fluent profiles. Transcriptions were annotated using ELAN (Max-Planck-Institute for Psycholinguistics, 2020). We set no minimum duration for pauses, and values could be zero (no pause before a word). Lexical Frequency and Collocation Strength (measured as t-scores) were analysed using the Frequency in Language Analysis Tool (FLAT; Zimmerer et al., 2018). We further determined word category using the R package "Spacyr" (Benoit & Matsuo, 2018).
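
For context, FLAT reports collocation strength as t-scores. Below is a hedged sketch of the standard corpus-linguistic bigram t-score, not the FLAT implementation itself; the word and bigram counts are invented.

```python
# Hedged sketch of a bigram collocation t-score: t = (O - E) / sqrt(O),
# where O is the observed bigram count, E = f(w1) * f(w2) / N is the count
# expected by chance, and N is the corpus size. Counts below are invented.
import math

def collocation_t_score(bigram_count, w1_count, w2_count, corpus_size):
    expected = w1_count * w2_count / corpus_size
    return (bigram_count - expected) / math.sqrt(bigram_count)

# Frequent, strongly associated pair -> high t-score (strong collocation).
print(collocation_t_score(bigram_count=800, w1_count=5000, w2_count=30000,
                          corpus_size=1_000_000))
# Rare, weakly associated pair -> low t-score (weak collocation).
print(collocation_t_score(bigram_count=3, w1_count=5000, w2_count=400,
                          corpus_size=1_000_000))
```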

 

Results

Linear mixed-effects models revealed that IWA showed longer PDs overall (p < .001) and produced longer pauses before function words (p < .001). When word category was included, Lexical Frequency effects were not significant (p = .983); however, the effect of Collocation Strength was, with shorter PDs within stronger collocations (p < .001). There was an interaction between Group and Collocation Strength, with greater effects of Collocation Strength in IWA (p < .001). An effect-size analysis showed a larger standardized coefficient for Collocation Strength in the IWA group (NC: β = -0.11; IWA: β = -0.31; p < .001), whereas for Lexical Frequency the standardized coefficient was larger in the NC group (NC: β = 0.14; IWA: β = 0.03; p < .001).

 

Conclusions

PD was influenced by Collocation Strength, supporting the view that strong collocations reduce processing demands. Collocation Strength had a bigger effect on pauses in aphasia, suggesting that as analytic capacities decrease, statistical properties, such as Collocation Strength, exhibit a greater influence on language production. Frequency-based approaches may be valuable in explaining patterns of preservation and impairment in aphasic language production.

Macrostructural aspects of narrative discourse in left- and right-hemisphere stroke in Brazilian Portuguese speakers with low education

ABSTRACT. Introduction

In Brazil, 70% of stroke survivors do not return to their professional activities and 50% need support to accomplish their daily activities (Agência Brasil, 2020). Moreover, people with lower socioeconomic status (SES), measured by level of education, occupation, and income, present poorer post-stroke functional outcomes (e.g., Avan et al., 2019). To date, limited research has been conducted on post-stroke patients with low SES. To address this gap, discourse analysis offers a multidimensional, ecological evaluation of language (e.g., Bryant et al., 2016). Traditionally, macrostructural language processes have been associated with the right hemisphere (RH) (Myers, 1999) and microstructural processes with the left hemisphere (LH) (Barker et al., 2017). Discourse can be analyzed by focusing on the microstructure of language (e.g., word classes) or the macrostructure (e.g., coherence), or by using measures at the interface of micro- and macrostructure (e.g., lexical informativeness) (Armstrong, 2000).

The first aim of this study was to determine whether patients who suffered a stroke in the LH or the RH differ from participants with no brain damage on macrostructural processes and lexical informativeness in narrative discourse production. The second aim was to explore the relationships between the discourse measures and cognitive and sociodemographic measures.

 

Methods

Thirty-one individuals who had an ischemic stroke in the LH (n=15) or RH (n=16) without major persistent language impairments and sixteen age- and education-matched controls were recruited. They were all native speakers of Brazilian Portuguese with low SES. Participants underwent a short neuropsychological assessment and produced an oral description of sequential stories: The dog story (Hübner et al., 2019), The car accident (Joanette et al., 1995), and The cat story (Ulatowska et al., 1981). Each sample was transcribed and analyzed. Discourse measures included cohesion, global coherence, macropropositions, narrativity and lexical informativeness.

 

Results

A significant effect of group was found on the cohesion score (p = .001). Post-hoc comparisons showed that both the LH and RH groups had lower performances than controls. A significant group effect was also found on global coherence (p = .005) with post-hoc comparisons revealing that the RH group had a lower performance than the healthy controls. Similarly, a significant group effect on the macropropositions score was found (p = .040). Post-hoc comparisons showed that the RH group had a lower performance than controls.

Unsurprisingly, moderate to strong associations were found between the discourse measures themselves. Among all other variables, the strongest correlations with all five discourse variables were found with naming. For instance, moderate correlations were found between naming and macropropositions (τ=.43, p<.001), narrativity (τ=.46, p<.001), and lexical informativeness (τ=.50, p<.001).

 

Conclusions

These results underline the importance of conducting cognitive and language studies in LH and RH post-stroke patients to better specify the characteristics of connected speech. Furthermore, this research contributes to increasing our knowledge of discourse production in lower-SES populations, which represent the majority in developing countries.

The role of relative frequency in the production of prepositional phrases in aphasia in Czech

ABSTRACT. Introduction

Usage-based construction grammar views language as a network of constructions (form-meaning pairings) shaped by individual linguistic experience (Diessel, 2019). Type, token, and relative frequency of use are seen as crucial factors influencing both language representation and processing. This approach has recently been applied to the analysis of language in aphasia with very promising results (e.g., Gahl, 2002; Hatchard & Lieven, 2019), suggesting that it might have more explanatory power than traditional rule-based approaches. In this paper, I present an analysis of prepositional phrases in a corpus of connected speech of Czech speakers with aphasia. The corpus, created by the author, contains transcripts of conversational, narrative, descriptive, and procedural discourse elicited from 11 individuals with aphasia (mild to moderate, fluent and non-fluent).

 

Methods

A subcorpus of two picture descriptions and a story retelling task was used in the analysis. This subcorpus was also used to generate fluency profiles of the individual participants, using mean length of utterance, number of disfluencies, and several other measures. All prepositional phrases (PPs) produced with no disfluencies (filled or silent pauses, repetitions) were extracted, resulting in a total of 202 phrases. These PPs were analyzed using frequency data from a corpus of spoken Czech and a corpus of movie subtitles (bigram frequency of preposition and complement noun, cumulative frequency of the complement lemma, and relative frequency of the word forms of the complement).
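
A hedged sketch of the frequency measures listed above for a single PP: bigram frequency of preposition plus complement form, cumulative lemma frequency of the complement, and the relative frequency of the produced word form within its lemma. The Czech counts below are invented; the real analysis drew on corpora of spoken Czech and film subtitles.

```python
# Hedged sketch with invented counts, not the corpus data used in the study.
corpus_counts = {
    ("v", "kleci"): 210,          # bigram: preposition + complement word form
}
lemma_form_counts = {             # counts of the inflected forms of the lemma 'klec'
    "klec": 300, "klece": 240, "kleci": 620, "klecí": 150,
}

bigram_freq = corpus_counts[("v", "kleci")]
lemma_cumulative = sum(lemma_form_counts.values())
relative_form_freq = lemma_form_counts["kleci"] / lemma_cumulative

print(bigram_freq, lemma_cumulative, round(relative_form_freq, 2))
# A high relative form frequency (here about 0.47) means the case-marked form
# governed by the preposition is also the most entrenched form of the lemma.
```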

 

Results

The analysis showed that 10 of the 11 participants were able to produce at least some instances of PPs without any disfluencies. A substantial number of these PPs express spatial relations and have similar frequency characteristics: the complement noun has a high relative frequency of occurrence in the grammatical case governed by the preposition. For example, the PP v kleci ‘in the cage’ was successfully produced even by participants with a very low level of fluency; the form kleci ‘cage-locative’ is the most frequent word form of the corresponding lemma.

 

Conclusions

The results provide some support for the usage-based model of language representation and processing. High relative frequency and probability of occurrence reflect a higher level of entrenchment, which requires fewer processing resources, resulting in a higher probability of successful production even in individuals with relatively low overall fluency. The analysis also provides new insights into the manifestation of aphasia in Czech, an underrepresented language in aphasia research, and opens new avenues for more focused therapy. I will also present the results of a comparison with instances produced with disfluencies; this analysis is still in progress. I predict that it will provide additional evidence that less frequent contexts are more prone to difficulties in language production.

The relationship between discourse efficiency, informativeness, and behaviors associated with lexical retrieval difficulty in people with mild anomic aphasia

ABSTRACT. Introduction

Discourse informativeness and efficiency are common targets of discourse analysis (Brisebois et al., 2020; Doyle et al., 1995; Leaman & Edmonds, 2019; Nicholas & Brookshire, 1993); however, the multi-dimensional nature of discourse can make it difficult to capture specific variables such as these without considering their interrelationship with other linguistic and cognitive functions in discourse production (Fromm et al., 2017; Marini et al., 2011; Wright & Capilouto, 2012). Behaviors such as false starts (t* t* table), filled pauses (uh, um) and silent pauses can be considered markers of difficulty in lexical retrieval or language planning in people with aphasia (Obermeyer et al., 2020; Whitney & Goldstein, 1989) and could impact discourse informativeness and efficiency. This project evaluated if measures indicating lexical retrieval difficulty predict informativeness and efficiency in the discourse of individuals with mild anomic aphasia.

 

Methods

This study utilized data from AphasiaBank (MacWhinney et al.,  2011). Participants included 26 individuals with anomic aphasia. The Average Aphasia Quotient from the Western Aphasia Battery-Revised was 86.81 (SD=4.66; range=78.3 to 93.4). The average participant age was 62.1 (SD=11.06) years at the time of testing and mean years of education was 14.92 (SD=2.65).

Discourse transcripts from the cat and tree (single picture), window (sequential picture), and umbrella (sequential picture) picture descriptions were compiled from AphasiaBank. Transcripts were coded for correct information units (CIUs), i.e., words that are accurate and relevant to the stimuli (Nicholas & Brookshire, 1993), and for complete utterances (Edmonds et al., 2009), which determine whether an utterance is relevant (+REL) and contains subject-verb-object structure. For this study, only the +REL component was used. Percentages were calculated for both CIUs and +REL; these outcomes served as measures of informativeness/relevance. CIUs/min were calculated as a measure of efficiency. Percent pause time (total pauses of 2 seconds or more divided by total transcript time), filled pauses (uh, um), and false starts (t* t*) were used as measures indicating lexical retrieval difficulty. Transcript coding and reliability checks were completed by trained research assistants.
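
A hedged sketch of the outcome measures defined above (%CIUs, CIUs per minute, %REL, and percent pause time). The counts are invented and the function name is illustrative only.

```python
# Hedged sketch, not the study's coding pipeline; all values are invented.
def discourse_measures(n_words, n_cius, n_rel_utts, n_utts,
                       total_pause_sec, total_time_sec):
    return {
        "pct_CIU": 100 * n_cius / n_words,               # informativeness at the word level
        "CIU_per_min": n_cius / (total_time_sec / 60),   # efficiency
        "pct_REL": 100 * n_rel_utts / n_utts,            # relevant complete utterances
        "pct_pause_time": 100 * total_pause_sec / total_time_sec,
    }

print(discourse_measures(n_words=180, n_cius=120, n_rel_utts=14, n_utts=18,
                         total_pause_sec=35, total_time_sec=240))
```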

 

Results

Linear regression models with simultaneous entry of predictors were run for each dependent variable (%CIUs, CIUs/min, %REL). Predictors for all models included filled pauses, false starts, and percent pause time. The model for %CIUs was significant (p=.006), with false starts a significant predictor (p=.004). In the second model, CIUs/min was the dependent variable; this model was significant (p<.001), with false starts (p=.001) and percent pause time (p=.006) contributing significantly. The %REL model did not reach significance.

 

Conclusions

False starts were predictive of word level informativeness and efficiency (%CIUs, CIUs/min) in the discourse of people with mild anomic aphasia. Percent pause time was also predictive of efficiency (CIUs/min). We did not find a significant relationship between behavioral measures of lexical retrieval difficulty and the utterance level measure of relevance (%REL). These findings support a relationship between false starts and the ability to produce relevant discourse at the word level and provide more insight into how breakdowns in lexical retrieval can manifest in discourse.

The manifestation of pronoun use in Turkish non-fluent aphasia
PRESENTER: Aysenur Akyuz

ABSTRACT. Introduction

Pronouns have been shown to be affected in the speech production of persons with aphasia (PWA). Earlier spontaneous speech studies reflect a general trend across different languages by reporting omissions of pronominal elements (Fabbro & Frau, 2001; Rossi & Bastiaanse, 2005). Other studies have shown that PWA overuse pronouns in languages with rich inflectional paradigms, such as Swedish and Icelandic (Ahlsén & Dravins, 1990; Magnúsdóttir & Thráinsson, 1990). Difficulty with pronouns appears to be largely heterogeneous (Ishkhanyan et al., 2017; Martínez-Ferreiro et al., 2019; for reviews: Arslan et al., 2021; Menn et al., 1990). Turkish is a richly inflected language with a large case-marking paradigm, and it allows object and subject dropping. Little is known about the manifestation of pronoun use in Turkish aphasia. This study investigates the use of pronoun variables in Turkish PWA's spontaneous speech production.

 

Methods

Narratives from 10 PWA (4 females, aged 43-74) and 10 non-brain-damaged controls (2 females, aged 37-67) reported in Arslan et al. (2016) were used in the current study. The participants were asked to produce narratives based on a personal interview and picture description. For each participant, a 200-word speech sample was extracted and analyzed. The following pronoun variables were evaluated: personal, reflexive, demonstrative, indefinite, possessive, and interrogative as well as the number of pronoun droppings. Pronouns were tallied with regard to the case-marking used (nominative, accusative, dative, locative, ablative) in subsequent analysis.

 

Results

Table 1 presents the results of the group comparisons for each pronoun variable examined. We found that Turkish-speaking PWA produced an elevated number of pronouns and elevated pronoun-to-noun and pronoun-to-word ratios, but not an elevated total number of nouns. Production of object and subject personal pronouns was within the control norms. The PWA produced an increased number of pronoun droppings in both object and subject positions compared to the controls, and a larger number of demonstrative and indefinite pronouns. Reflexive and interrogative pronouns were used very infrequently in both groups; however, the PWA produced the former less often than the controls. Further analysis showed that the PWA produced all case-marked pronouns within the control norms.

 

Conclusions

The results show that non-fluent aphasia in Turkish is manifested in an overuse of pronouns, evidenced by an increased pronoun/noun ratio. A closer examination of these overuses shows that the PWA overused the so-called empty variables (i.e., demonstrative and indefinite pronouns). This suggests that Turkish PWA overuse pronouns as a strategy to avoid retrieving nouns with complex morphology, as also evidenced in many languages with complex inflectional paradigms (see Menn et al., 1990). The overuse of pronouns by PWA speaking languages that allow pronoun dropping is not uncommon (see, e.g., Martínez-Ferreiro et al., 2019). Although Turkish allows pronoun dropping, the PWA's use of both object and subject dropping was above the control norms. This finding is consistent with the general characteristics of non-fluent aphasia in Turkish, namely reduced complexity and length of produced utterances (Arslan et al., 2016).

MiRAR- Mixed reality in aphasia rehabilitation: Concept and development
PRESENTER: Rajath Shenoy

ABSTRACT. Background:

Aphasia, an impaired ability to use language for communication after brain injury, is a major impediment to the quality of life (QOL) of affected individuals. Speech-language therapy (SLT), the primary means of intervention for aphasia, usually involves didactic interaction between the speech-language therapist and the client, often without regard to the real-life environments in which communication occurs (Boles et al., 2004). Provision of SLT in natural environments, however, is beyond the scope of conventional, clinic-based intervention setups. In light of technological advances (e.g., previous virtual-reality examples such as EVA-Park; Marshall et al., 2020), the proposed approach is expected to enable PWA to use their language in an ecologically valid and meaningful manner in natural communication contexts.

Aims:

With the aid of mixed reality (MR: augmented + virtual realities, i.e., AR + VR), we present the concept, development, and deployment of a social communicative approach to aphasia rehabilitation, delivered in a controlled manner to facilitate the communicative participation of PWA.

Methods and Material:

Team building: We constituted an interdisciplinary team of hired technical professionals from software development, 3D application development, immersive technology, and graphic design.

The implementation of the functional approach in the MR application was planned in phases. In the first phase, the SLPs critically appraised and planned the requirements for functional therapy approaches for PWA. This included the preparation and identification of relevant communication scenarios in a culturally relevant context, carried out with a qualitative study design (e-Delphi method). The participants were 10 experienced SLPs with a minimum of 3 years of experience in aphasia rehabilitation in India, recruited from the directory of the Indian Speech and Hearing Association (ISHA; https://www.ishaindia.org.in/).

Concept of the MR application: The proposed plan was to use the Microsoft HoloLens device to deliver conventional script training within the MR experience for PWA. This choice was supported by the device specifications (portability, comfort, and minimal risk of fatigue due to immersion) and by literature evidence in stroke patients (nausea, fatigue). The MR application (immersive technology) would have a VR, an AR, and a script component. The VR mode would provide social scenes, and the AR mode would facilitate the SLP's live interaction with the client. The script would be provided alongside the VR and AR modes. An example and a schematic diagram are provided in Figure 1.

 

Results:

In this report, we demonstrate our concept, work-flow, and development of communication therapy for PWA using immersive technology (Mixed reality).

The application consists of a monitoring admin panel (an SLP admin panel implemented as a web-page application) and a mixed-reality application (for the clients with aphasia). The SLP will be able to guide the PWA who is undergoing therapy with the MR glasses. Further, the SLP can control the scenario scripts (editor), sound, text, and display of written instructions for each individual PWA (script complexity based on Kaye et al., 2016). Twenty communicative scenarios are considered, based on social, cultural, and dialectal (standard) variations of Indian languages (Indian English, Kannada, Hindi, and Malayalam).

Conclusion:

The MR application has now been successfully developed. In the next stage of this ongoing work, the application will be used for the training of PWA so that meaningful, ecologically valid, and socially useful rehabilitation can be provided in controlled environments.

Linguistic Analysis of Effortful Utterances in Spontaneous Conversations Between People With and Without Aphasia: Form, Content, and Use
PRESENTER: Marion Leaman

ABSTRACT. Introduction

People with aphasia (PWA) often experience moments of struggle during conversation. When this happens, interlocutors may interrupt the PWA (Beeke et al., 2007) or complete their sentences with guesses (Purves, 2009). Alternatively, the partner may provide time for the PWA to complete the utterance. Here, we explored the value of a non-time-pressured conversational environment in which PWA had the opportunity to complete their utterances. We analyzed effortful utterances, defined as turns featuring pauses/filled pauses, using Bloom and Lahey's (1978) “form, content, and use” framework.

Methods

Ten people with minimal/moderate aphasia held two conversations with two different partners (usually SLPs). The partners allowed the PWA time to communicate ideas and to self-correct, and did not make guesses or suggest compensations. Samples of 8-12 minutes were transcribed. The first author located every pause/filled pause of ≥ 2 seconds, and RAs confirmed each (100% agreement).

 

Effortful utterances where the PWA commented on the difficulty (e.g., “I know the word”) were coded as production comments. We analyzed the remaining utterances using the following procedure. To analyze form, we counted the number of words produced. To analyze content, we coded the semantic information communicated using Renoult et al.’s (2020) categories of semantic content: general facts, autobiographical facts, self-knowledge, and expression of repeated events. To analyze language use, we examined the discourse function achieved by each effortful utterance according to Eggins and Slade (1997). Each was classified as an opinion, statement or question. Then we classified each as to whether its function entailed new information (‘opening move’); expansion on the PWA’s previous move (‘continuing move’); or a reaction to the partner’s previous move (‘reacting/responding move’).

 

Some utterances contributed informative words (coded as ‘contributory’) but were so short or unclear that they could not be classified with specificity for semantic and/or pragmatic content. Other utterances made no informative contribution whatsoever and were designated ‘non-informative’.

Reliability was assessed on 20% of the data, with the following results: transcription, 91.0%; semantic coding, 82.1%; discourse function coding, 84.4%. Word productivity was tallied by RAs and double-checked by the authors.

Results

We identified 313 effortful utterances, with an average of 3.72 words produced per utterance. Production comments comprised 10.9% of the data; contributory utterances comprised 8.3% of the data; and non-informative utterances comprised 4.5% of the data. For semantic content, 41.5% of utterances contained autobiographical facts, 20.4% were general knowledge, 9.3% were self-knowledge, and 1.9% were repeated events. For discourse function, 38.9% of utterances were opening moves, 23.3% were continuing moves and 15.9% were reacting/responding moves.

 

Conclusions

Overall, self-completion of effortful utterances by the PWA resulted in communicatively meaningful information (i.e., production utterances, contributory utterances, and semantically and/or pragmatically classifiable utterances) for 95.5% of the data. The PWA contributed mostly autobiographical facts and general knowledge, with 40% of turns classified as opening moves in which they directed the trajectory of conversation. These results demonstrate that when PWA are provided additional time and an engaged listener, it is possible for them to express their ideas, thereby making an active contribution to conversation. 

ALEA: a norm-referenced protocol for the clinical analysis of spontaneous speech in Spanish

ABSTRACT. Introduction

Spontaneous speech analysis (SpSA) is commonly used in clinical practice for people with aphasia, as it allows clinicians to detect deficits which may otherwise be missed and provides a baseline for further assessment. However, SpSA methods often lack clinical applicability given their time-consuming nature. Moreover, there is currently no standardised method for SpSA available for use in Spanish. This presentation seeks to address these gaps by introducing the ALEA (Análisis del Lenguaje Espontáneo en Adultos), a novel comprehensive method for SpSA in Spanish.

 

Methods

The ALEA is made up of 9 indices targeting sentence-level (MLU, approximation, finiteness, grammaticality, and subordination) and word-level phenomena (paraphasias and neologisms, nº of nouns, verbs, and incorrect verbs). These have been adapted from other SpSA methods, mainly the Quantitative Production Analysis (QPA; Saffran et al., 1989) and the Analyse voor Spontane Taal bij Afasie (ASTA; Boxum et al., 2013), to ensure reliability for use with a range of Spanish-speaking adult healthy and clinical populations, including those with mild aphasia. Semi-spontaneous speech samples were recorded and transcribed following the ALEA guidelines: https://lenguajespontaneo.cl/

 

Results

The results of 119 Spanish-speaking healthy volunteers are presented here, providing a norm-referenced sample (Table 1). Non-parametric tests showed significant differences on a number of indices as a function of demographic variables such as age, educational attainment, and gender; however, multiple regression analyses suggested that these variables had low explanatory power. Cut-off points for preliminary clinical use were calculated at the 5th and 95th percentiles. Clinical data from post-stroke aphasia (n=15), dementia (n=15), tumors (n=12), and vascular malformations (n=5) confirm the potential of the ALEA as a clinical screening tool.
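
A hedged sketch of how 5th/95th-percentile cut-off points could be derived for a single index; the normative values below are simulated, not the published ALEA norms.

```python
# Hedged sketch with simulated norms; the index name and values are invented.
import numpy as np

rng = np.random.default_rng(4)
mlu_norms = rng.normal(9.5, 1.8, 119)            # e.g. MLU in the normative sample
low_cut, high_cut = np.percentile(mlu_norms, [5, 95])
print(f"flag values below {low_cut:.1f} or above {high_cut:.1f}")
```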

 

Conclusions

The ALEA is a reliable tool for use with Spanish-speaking adult populations. Its main strengths are its controlled length and ease of administration, which favor its implementation in clinical practice. The number of indices is kept to a minimum to provide a first screening of the speech output of different groups of adults, although additional research is needed to validate the specificity and sensitivity of the method in the above-mentioned clinical populations.

Relation of executive functions and performance in conversation among people with aphasia

ABSTRACT. Introduction

Executive functions (EF) have been found to be associated with different levels of language processing, such as word production (e.g., Martin & Allen, 2008), sentence comprehension (see review in Key-DeLyria & Altmann, 2016), and functional communication (Fridriksson et al., 2006) as measured by the American Speech-Language-Hearing Association Functional Assessment of Communication Skills for Adults (ASHA FACS; Frattali et al., 1995). The importance of EF for engaging in conversation was discussed in a single case report by Frankel et al. (2007), which examined in detail the relation between EF deficits and difficulties in conversational repair in a participant with both cognitive and language impairments. However, empirical evidence demonstrating the relation between EF and functional communication in the conversation of PWA is lacking. The current study aimed to fill this gap by examining the association between EF and functional communication in a conversational context. We hypothesized that EF would significantly predict PWA's information exchange during conversation.

Method

Forty-seven Cantonese-speaking PWA participated in the study. Their performance on various cognitive tests evaluating EF, attention, and verbal short-term/working memory was analyzed using principal component analysis, resulting in two cognitive factors reflecting PWA's EF and their attention and memory, as reported in Wong and Law (2020). Their functional communication ability was estimated by counting the main concepts the PWA conveyed, in three story probes based on two comic strips and one short video, to communication partners who had no prior knowledge of the content. Since the ability to comprehend sentences might affect PWA's success in information exchange during conversation, sentence comprehension was assessed via a Cantonese sentence comprehension screener (Law & Leung, 1998). The above assessments were administered twice, three weeks apart. Correlations among the cognitive and linguistic variables were calculated before they were entered into a hierarchical regression analysis, with the story probe scores averaged across the two assessments as the predicted variable and the two cognitive factors and sentence comprehension as predictors.
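
To make the analytic logic concrete, the sketch below illustrates a hierarchical (nested-model) regression of the kind described, entering sentence comprehension first and the two cognitive factors second and comparing R². It is a minimal Python/statsmodels sketch with simulated data; the variable names and effect sizes are hypothetical and it is not the authors' analysis.

# Minimal hierarchical-regression sketch with simulated data (hypothetical
# variable names); illustrates the incremental-R^2 logic only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 47
df = pd.DataFrame({
    "sent_comp": rng.normal(size=n),        # sentence comprehension screener score
    "ef_factor": rng.normal(size=n),        # PCA factor 1: executive functions
    "attn_mem_factor": rng.normal(size=n),  # PCA factor 2: attention and memory
})
# hypothetical outcome: averaged main-concept score across the two sessions
df["story_probe"] = 0.5 * df["sent_comp"] + 0.4 * df["ef_factor"] + rng.normal(scale=0.5, size=n)

def fit(y, X):
    return sm.OLS(y, sm.add_constant(X)).fit()

step1 = fit(df["story_probe"], df[["sent_comp"]])                                  # Step 1
step2 = fit(df["story_probe"], df[["sent_comp", "ef_factor", "attn_mem_factor"]])  # Step 2

print(f"Step 1 R^2 = {step1.rsquared:.2f}")
print(f"Step 2 R^2 = {step2.rsquared:.2f} (Delta R^2 = {step2.rsquared - step1.rsquared:.2f})")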

Results

The three predictors were significantly correlated with scores of the averaged story probes with p values < .01. Results of hierarchical regression are shown in the table below. Both sentence comprehension and EF significantly predicted average performance on story probes produced by PWA, together accounting for 68% of the variance.

Conclusions

The results confirmed our hypothesis about the role of EF in the functional communication of PWA. This study is also among the first reports providing empirical evidence for the association between EF and conversation in PWA. This finding highlights the importance of detailed cognitive assessment of PWA in the management process. PWA and their communication partners should be better informed about the nature of communication difficulties. Further studies to identify effective strategies for both parties to cope with cognitive-linguistic impairments and breakdown in conversation are warranted.

Connected Speech Characteristics of Bengali Speakers with Alzheimer’s Disease: Evidence for Language-specific Diagnostic Markers
PRESENTER: Arpita Bose

ABSTRACT. Introduction & Objective

Speech and language characteristics of connected speech provide a valuable tool for identifying, diagnosing and monitoring progress in Alzheimer’s Disease (AD). However, our knowledge of linguistic features of connected speech in AD is primarily derived from English speakers; very little is known regarding patterns of linguistic deficits in speakers of other languages, such as Bengali. Bengali is a pro-drop, Indo-Aryan language with highly inflectional and complex morphosyntactic properties, and is structurally distinct from English. Given that the expected growth in neurodegenerative diseases will be from low- and middle-income countries where English is not the primary language, it is imperative to document, characterize and analyze the linguistic features of connected speech in languages native to these regions. The aim of this study was to characterize connected speech production and identify linguistic features affected in Bengali speakers with AD.

 

Methods

Participants were six Bengali-speaking AD patients and eight matched controls from the urban metropolis of Kolkata, India. Narrative samples were elicited in Bengali using the Frog Story. Samples were analyzed using the Quantitative Production Analysis (Rochon et al., 2000) and the Correct Information Unit (Nicholas & Brookshire, 1993) analysis frameworks to quantify six different aspects of speech production: speech rate, structural and syntactic measures, lexical measures, morphological and inflectional measures, semantic measures, and measures of spontaneity and fluency disruptions.

 

Results

 

In line with the extant literature from English speakers, the Bengali AD participants demonstrated decreased speech rate, simpler sentence forms and structures, and reduced semantic content (Sajjadi et al., 2012; Slegers et al., 2018). Critically, differences from the English-language literature emerged in domains tied to Bengali-specific linguistic features, such as its pro-drop nature and the inflectional properties of its nominal and verbal systems. Bengali AD participants produced fewer pronouns, in direct contrast with the increased use of pronouns by English-speaking AD participants (e.g., Ahmed et al., 2012; Fraser et al., 2016). Despite Bengali being a highly inflected language, participants showed no difficulty in producing nominal and verbal inflections and made no obvious inflectional errors. However, differences in the type of noun inflections were evident, with AD speakers using simpler inflectional features.

 

Conclusions & Implications

 

This study is the first of its kind to characterize connected speech production in Bengali AD participants. The profile is one of semantic difficulties, alongside key differences in the grammaticality of production, characterized by the choice of simpler and operationally less demanding options. Language-specific differences from English emerged in Bengali, characterized by the use of fewer pronouns and fewer reduplications, and similar levels of noun and verb inflection but with a preference for simpler inflections. This study is a significant step toward highlighting the importance of developing language-specific linguistic markers for AD and provides a framework for cross-linguistic comparisons across structurally distinct and under-explored languages.

The Construct of Stance as a Unifying Framework to Understand the Communicative Functionality of Narrators and Co-Narrators with Aphasia in Conversational Settings

ABSTRACT. Introduction

The ubiquity of personal narration in everyday life (Bruner, 2002; Fludernik, 1996; Norrick, 2000; Ochs & Capps, 2001; Quasthof & Becker, 2005) has catalyzed lines of research on the communicative functionality of narrators with aphasia. Past research on elicited personal narration of people with aphasia (e.g. Olness, Matteson, & Stewart, 2010; Olness & Ulatowska, 2011) serves as an entrée into emerging lines of research on spontaneous narration and co-narration among people with aphasia in conversational settings, which are represented by the present study.

The history of aphasiology has established a long and fruitful tradition of breaking new scientific ground with phenomenological case studies that are rigorously framed theoretically; the present study follows in that tradition. Specifically, data from a case of an aphasia-group session that displays multiple exemplars of spontaneously occurring, conversationally-integrated personal narration and co-narration are analyzed. Analytic methods are derived from converging theoretical models that are relevant to conversational narration and its pragmatic underpinnings: models of stance (Keisanen & Kärkkäinen, 2014; Du Bois, 2007), stance intersubjectivity (Du Bois, 2007, 2014), linguistic evaluative devices (Labov, 1972; Martin & White, 2005), and the contrast between pragmatic modalizing/emotive and referential communicative functions (Nespoulous, Code, Virbel & Lecours, 1998; Olness & Ulatowska, 2020). Complementary constructs of relevance include: discourse typology (Esser, 2014; Longacre, 1996), footing (Goffman, 1981), multi-modality communication (Goodwin, 2003); language as a form of cooperative activity (Goodwin, 2013; Lerner, 2002); and contextual relativity of narrative coherence (Hyvärinen, Hydén, Saarenheimo, & Tamboukou, 2010).

 

Methods

Data: Video-recorded, orthographically transcribed, 45-minute session of an aphasia group specifically designed to engage group members in “dynamic, naturalistic conversation” on topics that “shift(ed) in response to current events, member interests, or spontaneous comments and opinions,” similar to the group design described by Garrett, Staltari & Moir (2007, p. 164). The group served as a clinical training venue for graduate students in speech-language pathology. Conversational participants: seated around a common table; six adults with aphasia (five male, one female; mild to moderate severity; non-fluent and fluent aphasia types); four female student clinicians.

 

Results

At least 18 primary-teller narratives of varying lengths were embedded in the conversation, each co-narrated verbally and non-verbally by others. The estimated total time spent in narration was 62%. Use of evaluative devices, semantic paraphrase, and syntactic parallelism across conversational turns reflected stance resonance. Chains of stories on a thematic topic reflected parallel stance (evaluative content), e.g., young age at the time of first employment. Conversational turns consisted of verbal, prosodic, and gestural moves, and combinations thereof, across all conversationalists.

 

Conclusions

The field of aphasiology has faced an ongoing challenge to reconcile the seemingly tenuous relationship between clinical linguistic impairment and naturally contextualized communicative functionality of individuals with aphasia. The present study provides one portal into the larger field of potential theoretical and phenomenological solution sets that may address this challenge: an instrumental case study that provides theoretical inroads centered on the construct of stance to advance the study of conversational narration and co-narration by and among people who have aphasia.

Session 19 (permanent): Poster session

Tuesday, 3.15pm-4.45pm: Phonology; Orthography; Comprehension; Pragmatics

Visual Influences on Auditory Processing in Noise in Aphasia
PRESENTER: Anastasia Raymer

ABSTRACT. Introduction: Individuals with relatively mild aphasic auditory comprehension impairments experience inordinate difficulty listening to speech in degraded listening conditions. In addition to difficulty with increased speech rate, reduced response times, and accents, the presence of background noise poses considerable difficulty for individuals with aphasia as noise levels increase (Healy et al., 2007; Kittredge et al., 2006). In degraded listening conditions, the visual modality becomes especially important for facilitating auditory processing (Jesse & Janse, 2012).  Little information is available about the benefits of visual information for auditory processing in noise for individuals with aphasia.  The purpose of this project was to examine the influence of increased noise and visual information for auditory processing in individuals with aphasia.

 

Methods: Participants included seven right-handed adults with chronic aphasia following left hemisphere stroke. Western Aphasia Battery-Revised (Kertesz, 2007) scores surpassed 7/10 in auditory comprehension subtests, suggesting relatively mild impairments. We also tested five individuals with no history of stroke. Hearing was within normal levels for all but two participants with unilateral high frequency hearing loss (See Table 1). All provided written informed consent to participate in this study.

           

Participants completed the Quick Speech in Noise (QSIN, Killion et al., 2004), a standardized audiological measure requiring sentence repetition (IEEE unpredictable sentences). In the standard auditory-only (AUD) condition, participants heard sentences spoken through headphones as signal-to-noise ratio (SNR) varied from 20-0 dB in five sentence blocks. In the experimental auditory+visual (AV) condition, participants could hear and see the speaker on a monitor.  As the participants repeated sentences, the examiner marked each sentence for five key words. We calculated the number of key words repeated correctly across five SNR levels for AUD and AV conditions (max score = 40 per SNR).  Distortions due to apraxia of speech were accepted as correct responses.
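
For illustration only (not the study's scoring procedure), the snippet below tallies key words correct at each SNR level in the AUD and AV conditions and computes the visual advantage at each level; all values are hypothetical.

# Illustration only, not the study's scoring code: key words correct per SNR
# level (out of 40) in the auditory-only (AUD) and auditory+visual (AV)
# conditions, and the visual advantage at each level. Values are hypothetical.
snr_levels = [20, 15, 10, 5, 0]                       # dB, five-sentence blocks

aud = {20: 38, 15: 36, 10: 30, 5: 18, 0: 6}           # hypothetical scores
av = {20: 39, 15: 37, 10: 34, 5: 26, 0: 17}

for snr in snr_levels:
    advantage = av[snr] - aud[snr]                    # benefit from seeing the speaker
    print(f"SNR {snr:>2} dB: AUD={aud[snr]:>2}, AV={av[snr]:>2}, visual advantage={advantage}")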

 

Results: Results are depicted in Figure 1. As expected, the aphasia group performed significantly lower than the controls across SNR levels (F=8.37, p=.01, partial η²=.46), with performance declining as SNR approached 0-5 dB. Both groups showed a modality advantage in that performance in AV was significantly greater than in AUD (F=66.92, p=.00, partial η²=.87). Calculating the visual advantage (AV minus AUD) across SNR levels, both groups showed similar levels except for SNR 0, where a significant between-groups difference occurred (t=2.36, p=.04). In the AV condition, the control group experienced a 10.60-point advantage compared to a 2.57-point advantage for the aphasia group.

 

Conclusions: As expected, individuals with aphasia demonstrated considerable loss of information as SNR levels decreased (noise levels increased). The aphasia group's performance faltered at SNR 5 dB, whereas the controls declined at SNR 0 dB. At the most difficult noise level (0 dB), the aphasia group experienced considerably less benefit from visual information than the control group. These findings suggest that the use of visual strategies to enhance auditory processing in degraded conditions may not be as effective as expected for individuals with aphasia, and different compensatory strategies or interventions may be needed to engage visual information to support auditory processing.

The Processing Mechanism of Categorical Perception of Lexical Tones in Chinese Speakers with Poststroke Aphasia
PRESENTER: Zhang Wei

ABSTRACT. Objective: Compared to the extensive research on categorical perception (CP) in adults, the processing mechanism of CP of lexical tones in Chinese speakers with poststroke aphasia has received little attention. This study aims to provide behavioral and neural evidence by examining CP of lexical tones in Chinese speakers with Wernicke’s poststroke aphasia.

 

Design: Two patients with Wernicke's poststroke aphasia and an equal number of controls matched in age, gender and education participated in behavioral and event-related potential (ERP) experiments. Sampled at 44.1 kHz and digitized at 16 bits, the Chinese monosyllable /da/ was recorded with Tone 1 (T1) and Tone 2 (T2) in a sound-attenuated room by a native female speaker from northern Mainland China. The two lexical tones were normalized to a sound pressure level of 70 dB and a duration of 200 ms using the Praat software. The Mandarin T1-T2 continuum was then created by applying the pitch-synchronous overlap-add (PSOLA) function in Praat: nine stimuli were synthesized spanning the continuum, with an equal acoustic interval between adjacent steps. Prototypically, the first stimulus (S1) corresponded to T1 and the last stimulus (S9) to T2. The non-speech stimuli were resynthesized pure tones with exactly the same pitch, intensity, and duration as the speech stimuli. The ERP experiment adopted a multi-feature passive oddball paradigm. Fifteen standard stimuli were played first to prompt subjects to establish a standard perceptual template. Then, 1000 stimuli (800 standards and 200 deviants) were played binaurally; each type of deviant occurred 100 times. The deviants were presented pseudo-randomly, with any two adjacent deviants separated by at least three standards. The speech stimuli were presented in one block and the non-speech stimuli in another; the two blocks were presented in a counterbalanced order.
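
The stimuli themselves were resynthesized with Praat's PSOLA function as described above; purely as a conceptual sketch, the numpy snippet below illustrates the idea of a nine-step continuum whose pitch contours sit at equal acoustic intervals between the T1 and T2 endpoints. The contour values are hypothetical.

# Conceptual sketch only: the actual stimuli were resynthesized with Praat's
# PSOLA function. This snippet just illustrates a 9-step continuum whose pitch
# contours are spaced at equal acoustic intervals between the Tone 1 and
# Tone 2 endpoints; the contour values below are hypothetical.
import numpy as np

n_points = 50                                         # samples of the f0 contour
f0_t1 = np.full(n_points, 250.0)                      # T1: high level tone (Hz)
f0_t2 = np.linspace(180.0, 260.0, n_points)           # T2: rising tone (Hz)

n_steps = 9
continuum = [f0_t1 + (step / (n_steps - 1)) * (f0_t2 - f0_t1)
             for step in range(n_steps)]              # S1 equals T1, S9 equals T2

for i, contour in enumerate(continuum, start=1):
    print(f"S{i}: onset {contour[0]:.0f} Hz, offset {contour[-1]:.0f} Hz")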

 

Results: In the identification and discrimination tasks, although the perceptual boundary positions and the ability to discriminate between-category tone pairs did not differ significantly between the two groups, the boundary width values and within-category discrimination accuracies did: boundary width increased significantly and within-category discrimination accuracy decreased significantly in the aphasic group. The ERP results echoed this behavioral performance. The MMN mean amplitude evoked by within-category deviants was significantly smaller for aphasic patients than for healthy controls, regardless of speech or non-speech condition. This implies weakened pre-attentive processing of acoustic information by aphasic patients, as the detection of subtle acoustic nuances of pitch was measurably decreased. In addition, the MMN peak latency elicited by across-category deviants was significantly shorter than that elicited by within-category deviants for both groups, indicating that phonological information of lexical tones is processed earlier than acoustic information at the pre-attentive stage, and that cortical plasticity can still be induced in aphasic patients.

 

Conclusion: Findings suggest reduced sensitivity to within-category acoustic information but preserved categorical perception of lexical tones in Chinese speakers with Wernicke's poststroke aphasia.

Fronto-central connectivity discriminates successful from unsuccessful phoneme perception in Wernicke’s aphasia
PRESENTER: Tina M D Mello

ABSTRACT. Background: Impaired speech perception is a core symptom of Wernicke’s-type aphasia (WA). It is thought to be causally linked to the language comprehension impairment and is a key target in impairment-based neurorehabilitation (e.g. Woodhead et al., 2017).  Speech perception impairments manifest as phonological discrimination difficulties. However, accurate discrimination can be observed when phonological changes are sufficiently acoustically distinct (Robson et al., 2014). At the neural level, perception success is not always discriminated by magnitude of neural activity in either the aphasia or the neurotypical population (Robson et al., 2014; Sharma et al., 1993). This study tested the hypothesis that phonological perception success is reflected in inter-regional connectivity within the speech perception network, observable in scalp-level connectivity measures.

 

Method: Data from seven WA and seven neurotypical participants from Robson et al., 2014 were re-analysed. EEG was recorded while participants listened to a multiple deviant mismatch negativity (MMN) paradigm while watching a silent film. Participants heard standard and deviant CVC nonword stimuli. Deviant stimuli were either perceptible or non-perceptible changes from the standard, based on prior behavioural testing. Deviant stimuli were additionally presented in a standard “deviant alone” condition.

EEG data were preprocessed and the imaginary coherence (IC; Nolte et al., 2004), a measure of functional connectivity, was calculated between all sensor pairs in the theta, alpha, beta and gamma bands for three time windows: (1) pre-MMN (-100-100ms); (2) MMN (100-300ms); and (3) post-MMN (300-500ms). IC MMN change was calculated as the difference in IC during the MMN window relative to the deviant-alone stimuli and the surrounding pre- and post-MMN time windows.
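
As a minimal sketch of the connectivity measure (not the study's preprocessing or analysis pipeline), the snippet below computes imaginary coherence between two sensors as Im(Sxy)/sqrt(Sxx*Syy) and averages it over a theta band; the sampling rate, band edges, and signals are hypothetical.

# Minimal sketch (not the study's pipeline): imaginary coherence (Nolte et
# al., 2004) between two EEG sensors, Im(Sxy) / sqrt(Sxx * Syy), averaged
# over a frequency band. Sampling rate, band edges, and the random signals
# are hypothetical placeholders.
import numpy as np
from scipy.signal import csd, welch

fs = 500                                            # Hz, hypothetical
rng = np.random.default_rng(0)
x = rng.standard_normal(fs * 10)                    # sensor 1 (e.g., FCz)
y = rng.standard_normal(fs * 10)                    # sensor 2

f, sxy = csd(x, y, fs=fs, nperseg=fs)               # cross-spectral density
_, sxx = welch(x, fs=fs, nperseg=fs)                # auto-spectra
_, syy = welch(y, fs=fs, nperseg=fs)

ic = np.imag(sxy) / np.sqrt(sxx * syy)              # imaginary coherence per frequency

theta = (f >= 4) & (f <= 7)                         # theta band, hypothetical edges
print(f"mean theta-band IC: {ic[theta].mean():.3f}")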

The IC MMN change between the peak ERP MMN response electrodes (FCz, Cz) and 9 sensor sub-sets was isolated, averaged, and subjected to inferential statistics.

 

Results: A 2x2x3x3 ANOVA (group x perceptibility x anterior-posterior sets x left-right sets) found a main effect of perceptibility for IC MMN change in the theta band (F(1,1)=7.7, p=0.02). Due to low group numbers, follow-up Wilcoxon tests explored theta IC MMN change over each sensor sub-set. A significant perceptibility difference was found for IC between central and right anterior sensors for the WA group (Z=-2.2, p=0.03), corresponding to peak ERP activity.

 

Conclusions: In line with previous findings, theta band connectivity was associated with MMN auditory change detection (Hsiao et al., 2009). Theta connectivity distinguished phoneme perception success, irrespective of aphasia status or stimulus type. Theta oscillations may support integration of phonological information throughout the residual speech perception and attention networks. These results concord with previous findings that functional connectivity between residual network components better accounts for behaviour in aphasia than response magnitude within local neuron populations (e.g. Baldassarre et al., 2019) and could be used to evaluate the outcome of therapy research. Future analyses will use permutation testing to further explore significance at the group and case-series levels.

Relationship between working memory and temporal information processing in individuals with aphasia
PRESENTER: Mateusz Choinski

ABSTRACT. Objective

Aphasia is usually accompanied by deficits in non-linguistic cognitive functions, such as executive functions, attention and working memory (WM), as well as temporal information processing (TIP) in the millisecond range.

The aim of the present study was to investigate the efficiency of verbal working memory (VWM) and spatial working memory (SWM) in aphasic subjects in relation to the severity of language impairment and the efficiency of TIP.

Participants

Thirty right-handed subjects (20 male) with post-stroke aphasia following haemorrhage or infarction (time post-onset: M = 51 weeks, SD = 55 weeks) participated in the study. They were aged from 27 to 82 years (M = 59 years, SD = 14 years).

Methods

Two tests assessing VWM and SWM were administered: (1) a receptive verbal test and (2) the Corsi Block-Tapping Test. Both tests were given in forward (addressing maintenance processes, i.e., storing, monitoring, and matching information) and backward (addressing manipulation in addition to maintenance, i.e., reordering and updating information) versions. An Auditory Comprehension Index (ACI) was calculated from the speech reception tests administered. TIP efficiency was measured as the ability to temporally order auditory stimuli in the millisecond range.

Results

For VWM, both forward and backward tasks correlated with ACI and with the efficiency of TIP. In contrast, for the SWM task such correlations were significant for the backward version only. Moreover, partial correlation analysis controlling for ACI revealed that correlations between TIP and the SWM backward indices remained significant, while those for VWM (both forward and backward) became nonsignificant.

Conclusions

The results indicated that the level of verbal competency appears to play an important role in both VWM tasks, whereas TIP (which is associated with manipulation processes) appeared to be important for SWM, but only on the backward task.

Aphasia in the Bengali language: Excerpts from the Kolkata Aphasia Study
PRESENTER: Durjoy Lahiri

ABSTRACT. Introduction: Knowledge of aphasia incidence and profiles in different languages is important for understanding cross-linguistic diversity in the brain representation of language. Here we attempt to elaborate various aspects of aphasia in speakers of the Bengali language from eastern India. In addition to the usual aphasia incidence, symptomatology, severity and recovery, our study encompassed crossed aphasia and lesion-aphasia discordance as well.

Methods: Between 2016 and 2018, cases of aphasia following first-ever stroke were collected from a tertiary care stroke unit in Kolkata (India). The Bengali version of the Western Aphasia Battery was used for language assessment in study participants. Thorough demographic data were recorded for each recruited patient, including age, gender, handedness, bilingual status, and educational background. Lesion localization was done using magnetic resonance imaging (3T) for ischemic stroke (if not contraindicated) and computed tomography for hemorrhagic stroke. Among 515 screened cases of first-ever acute stroke, 208 presented with aphasia. Language assessment was done between 7 and 14 days post-onset in all study participants and was repeated between 90 and 100 days in patients available for follow-up. Appropriate statistical tests were used for analysis of the collected data.

Notable idiosyncratic features of Bengali language are, by and large, related to its vowel duration as well as intonational pattern. Bengali uses a phonological writing system—a so-called abugida—whereby vowels are represented as diacritics rather than independent letters. Bengali is written from left to right and lacks distinct letter cases.

Results: The incidence of post-stroke aphasia in our sample was 40.39%, with Broca's aphasia being the commonest type followed by global aphasia. Higher education was found to be an independent predictor of fluent aphasia. The majority (78.8%) of the participants showed very severe aphasia, and the independent determinants of severity were hemorrhagic stroke, higher lesion volume and non-fluent aphasia. Bilingualism appeared to be a protective factor, as monolingual participants showed higher initial severity of post-stroke aphasia. The most important determinant of aphasia recovery in our sample was initial aphasia severity, followed by type of aphasia. Lesion-aphasia discordance was observed in 14.92% of participants with aphasia, and discordance favored the non-fluent aphasia type. Patients with hemorrhagic stroke, posterior peri-sylvian lesions and higher education were more likely to display lesion-discordant aphasia. Recovery was also found to be better in lesion-discordant aphasia. The incidence of crossed aphasia was higher (6.73%) in Bengali speakers than reported for other languages, and the profile of crossed aphasia favored the non-fluent type despite wide variation in lesion location.

Conclusion: To our knowledge, this is the first documentation of the epidemiological aspects of post-stroke aphasia in Bengali speakers. An attempt was made to describe aphasia in the Bengali language in its totality (including uncrossed and crossed aphasias and lesion-aphasia discordance), which might help future studies to explore its representation in the healthy brain.

Defining hypoperfusion in chronic aphasia: an individualized thresholding approach
PRESENTER: Noelle Abbott

ABSTRACT. Introduction

Individuals with chronic aphasia (IWA) exhibit variable patterns of language impairment, which makes it difficult to identify structure-function brain relationships[1,2]. This variability may be due to underlying alterations in brain function. Prior research has demonstrated that IWA have reduced cerebral blood flow (CBF; hypoperfusion) in areas of the brain that are structurally intact[3,4]. However, across these studies there is little consensus on how to best define hypoperfusion. Though standard CBF threshold values exist (healthy≥50mL/100g/min, hypoperfused=12-20mL/100g/min, necrotic≤12mL/100g/min), they do not fully capture tissue functionality in IWA[5-7]. Further, group-level analyses may overshadow important individual differences. In this exploratory study, we defined an individualized metric for hypoperfusion and used it (vs. standard approaches) to investigate (1) when perilesional tissue (often functionally compromised) returned to “normal” CBF levels and (2) how well our metric correlated with auditory comprehension.

 

Methods

Participants included 6 monolingual, right-handed (premorbid), chronic (>1 year) IWA who had a single, unilateral, left hemisphere stroke. Aphasia subtype and severity were based on the Boston Diagnostic Aphasia Examination-3 and the Western Aphasia Battery-Revised[8-9]. Auditory comprehension was measured through the comprehension subtests of these assessments.

 

Neuroimaging Procedures: Anatomical and resting state CBF data were acquired using a 3T-GE scanner (pre-processing information can be found in Abbott et al. 2021[10]). All scans were co-registered and labeled using the Automated Anatomical Labeling Atlas[11]. To systematically define perilesional tissue, we created four 3mm perilesional bands (0-3mm, 3-6mm, 6-9mm, 9-12mm).

 

Analyses: Group- and individual-level analyses were performed to demonstrate the importance of an individualized approach. Here, we defined “normal” brain tissue based on each participant’s right-hemisphere average CBF (CBFRH) and “functionally compromised” tissue as any tissue with CBF more than 1.5 standard deviations below CBFRH. Hypoperfusion in LH perilesional bands and in specific regions of interest (ROIs) was identified to explore the relationship between hypoperfusion and language behavior.
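
A minimal sketch of this individualized approach is given below; it is illustrative only and not the study's pipeline. It assumes 1 mm isotropic voxels and pre-computed boolean masks, defines the threshold as 1.5 SD below the right-hemisphere mean CBF, and builds perilesional bands by successive dilation of the lesion mask.

# Illustrative sketch only (not the study's pipeline): an individualized
# hypoperfusion threshold (1.5 SD below the right-hemisphere mean CBF)
# applied to 3 mm perilesional bands built by successive dilation of the
# lesion mask. Assumes 1 mm isotropic voxels and boolean masks.
import numpy as np
from scipy.ndimage import binary_dilation

def individual_threshold(cbf, rh_mask):
    """Threshold = mean(CBF in right hemisphere) - 1.5 * SD."""
    rh = cbf[rh_mask]
    return rh.mean() - 1.5 * rh.std()

def perilesional_bands(lesion_mask, width_vox=3, n_bands=4):
    """Boolean masks for successive bands (0-3, 3-6, 6-9, 9-12 voxels)."""
    bands, inner = [], lesion_mask
    for _ in range(n_bands):
        outer = binary_dilation(inner, iterations=width_vox)
        bands.append(outer & ~inner)
        inner = outer
    return bands

# Tiny synthetic volume for illustration (20^3 voxels, hypothetical values).
rng = np.random.default_rng(1)
cbf = rng.normal(50, 5, size=(20, 20, 20))            # mL/100 g/min
rh_mask = np.zeros_like(cbf, dtype=bool); rh_mask[:, :, 10:] = True
lesion_mask = np.zeros_like(cbf, dtype=bool); lesion_mask[8:12, 8:12, 2:5] = True

thr = individual_threshold(cbf, rh_mask)
for i, band in enumerate(perilesional_bands(lesion_mask)):
    print(f"band {i}: mean CBF = {cbf[band].mean():.1f} (threshold = {thr:.1f})")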

 

Results

Our individualized approach was more sensitive to differences in tissue functionality for each participant. While the group-level analysis showed no difference between the 0-3mm band and the calculated hypoperfusion threshold (t(5) = -1.18, p = 0.15), individual-level analyses revealed additional information: participants differed in whether and when CBF values returned to “normal” across the remaining three bands.

Our individualized approach also detected hypoperfusion in ROIs that remained structurally intact, suggesting that our metric is more sensitive to individual patterns of brain function. Correlations between the two CBF metrics (standard/individual) and language behavior revealed a correlation between auditory comprehension and multiple temporal regions, which did not emerge with standard thresholding. These results suggest that our individualized metric may better identify functionally compromised tissue on an individual basis.

 

Conclusions

We propose a new approach for measuring functionally compromised brain tissue in IWA. Standard cut-off values and group-level analyses often over- or under-estimate tissue functionality in IWA. These results underscore the necessity of considering not just the structural integrity of brain regions but also the functional integrity when investigating structure-function relationships. By adding in measures of functional integrity, researchers may be able to better account for some of the variability demonstrated by IWA.

Orthographic and lexical effects in Neglect Dyslexia: evidence from prefixation
PRESENTER: Bianca Franzoia

ABSTRACT. Introduction:

Information on lexical representation and processing can be obtained by observing how attention and lexical access interact in Neglect Dyslexia (ND). Spared morpho-lexical knowledge has been shown, indeed, to modulate the exploration of written material in ND (Semenza et. al, 2011; Reznick & Friedmann, 2015). The present study specifically aims at investigating whether and how morpho-lexical variables may modulate reading of prefixed words.

 

Methods:

Patient ZE, 61 y.o., suffered a tumour lesion in the right posterior temporal lobe. He showed left hemispatial neglect (BIT conventional: 40/146); additionally, clinical assessment and the BIT behavioural subtests (52/81) revealed ND. He was administered 210 prefixed Nouns (N) and 105 Past Participles (P) to read aloud. “Root boundedness” (bound vs. free) and “semantic transparency” (transparent vs. opaque) were considered. Nouns were thus divided into four types: Bound Opaque (BO: antipatia-antipathy), Pseudo-prefixed (PP: antichità-antiquity), Free Transparent (FT: antivirus-antivirus) and Prefixed Non-Words (NW: antimento-antichin). Participle types were: Bound Transparent (BT: condensato-condensed), Pseudo-prefixed (PP: continuato-continued), Free Opaque (FO: concentrato-concentrated), and Prefixed Non-Words (NW: conpiovuto-conrained).

Word length, word frequency, type of prefix and prefix frequency were matched across categories. Stimuli were presented individually, in random order, at the center of a monitor screen (80 pt.), with no time constraints.

 

Results:

Confirming his ND diagnosis, ZE made, overall, a much higher number of errors on the left (96%) than on the right side (9%). Left-sided errors were classified either as morphological, when they respected the prefix-root boundary (e.g., omission/substitution of the prefix), or as other when they did not (e.g., partial prefix omission/omission beyond the prefix…).

An overwhelming (χ² = 18.189, p < 0.001) prevalence of morphological over other errors was observed. However, significant differences in the distribution of errors across categories were found (χ² = 15.075, p < 0.05): words likely represented as whole units (i.e., PP and, to a lesser extent, BO) showed a lower proportion of morphological errors. In contrast, words likely stored as parsed (FT) or lacking a lexical entry (NW) showed the highest rates of morphological errors.

 

Conclusions:

These results provide evidence that attention to written material is modulated by lexical information and not just by orthographic information. Complex words are thought to engage two different stages in reading (Rastle & Davis, 2008). A pre-lexical morpho-orthographic segmentation, based solely on the analysis of orthography, would characterize the earliest stages of visual word perception. If attention is modulated just at this level, the effects of ND would have equally affected all categories of prefixed/pseudo-prefixed words and non-words. Morpho-semantic decomposition would characterize later linguistic processing. If attention to written material is, in addition, modulated at this later level, the effects of ND would influence the patient’s performance in different word categories unequally: the leftward portion of words that are not decomposed, like PP, or less likely to be decomposed, like BO, would be less easily dropped.

These results, by showing to what extent ND is sensitive to lexical factors engaged in higher-level processing of prefixed words, highlight the complex nature of this disturbance.

 

Prefixation in a case of deep dyslexia and neglect
PRESENTER: Carlo Semenza

ABSTRACT. Introduction:

Neuropsychological investigations of lexical morphology using reading aloud as the main task have been conducted in cases of deep/phonological dyslexia (DD), a result of damage to the left hemisphere, and of neglect dyslexia (ND), mostly resulting from damage to the right hemisphere. Both conditions can reveal the mental organization of complex words. In DD, reading is possible only via the lexicon; errors of a morphological nature derive from the fact that complex words may be stored as decomposed into root and affixes (Patterson, 1980); affixes are thus prone to omission or substitution with other affixes. In ND, the left side of words is ignored; morphological boundaries, rather than merely spatial factors, have been shown to modulate reading (Reznick and Friedmann, 2015; Semenza et al., 2011). The two conditions, resulting from damage to different hemispheres, are unlikely to coexist in the same case, and their combination has never been reported in neuropsychological studies of morphology. Case DE, presenting with an unusual combination of DD and ND, therefore allows interesting observations about the processing of complex words. Prefixed words were used in this investigation because they have morphological elements at both the left and the right edge.

 

Methods:

Patient DE, 63 y.o., was affected by fronto-temporal dementia (MMSE: 13/30; MoCA: 9,11/30). Left hemi-spatial neglect (BIT: 47/146), including ND, was documented. In reading non-prefixed words, a prevalence of left-sided errors emerged.

A clear pattern of DD was additionally observed (words better than non-words, morphological and semantic errors).

DE was administered 210 prefixed Nouns (N) and 105 Past Participles (P) to read aloud. “Root boundedness” (bound vs. free) and “semantic transparency” (transparent vs. opaque) were considered. Nouns were thus divided into four types: Bound Opaque (BO: antipatia-antipathy), Pseudo-prefixed (PP: antichità-antiquity), Free Transparent (FT: antivirus-antivirus) and Prefixed Non-Words (NW: antimento-antichin). Participle types were: Bound Transparent (BT: condensato-condensed), Pseudo-prefixed (PP: continuato-continued), Free Opaque (FO: concentrato-concentrated), and Prefixed Non-Words (NW: conpiovuto-conrained).

Word length, word frequency, type of prefix and prefix frequency were matched across categories. Stimuli were presented individually, in random order, at the center of a monitor screen (80 pt.), with no time constraints.

 

Results:

On prefixed words, DE committed about as many errors on the left as on the right side (χ² = 2.712, p = 0.099). The majority of errors were classified as morphological (prefix/suffix omissions/substitutions). Importantly, errors were distributed unequally between the right and the left side across categories (χ² = 44.626, p < 0.001). Words likely represented as whole units (i.e., PP, BO, BT, FO) showed a higher proportion of right-sided errors, relatively sparing the prefixes. In contrast, words likely stored as parsed (FT) or lacking a lexical entry (NW) showed higher rates of left-sided errors.

 

 

Conclusions:

These results provide striking evidence that attention to written material is modulated by lexical information. DD would enhance the likelihood of committing morphological errors on words whose internal representations are likely to be stored as decomposed. Prefixes of these words seem to be more sensitive to the effects of ND.  

Vowel Dysgraphia
PRESENTER: Maya Yachini

ABSTRACT. Introduction

This research describes a new type of developmental dysgraphia, vowel dysgraphia, characterized by a selective deficit in vowel writing in the sublexical route. Cotelli et al. (2003) and Cubelli (1991) reported three individuals whose difficulty in writing vowels was ascribed to an orthographic-output-buffer deficit. Developmental vowel dysgraphia has not been previously reported, nor has a selective vowel deficit in the sublexical route.

 

Methods

We examined the writing of 427 Hebrew speakers without a history of brain damage whom we diagnosed as having dysgraphia based on their total error rates in writing words and nonwords. We used the TILTAN writing screening test (Friedmann et al., 2007) to determine the type of dysgraphia each participant had. The participants who were diagnosed with vowel dysgraphia completed a further series of tests designed to assess the characteristics of this dysgraphia and its locus in the spelling model. These tests spanned input and output modalities – writing to dictation, written naming, oral spelling, typing, and spontaneous writing – of words and nonwords, and of words with various characteristics with respect to vowels and consonants.

 

Results

Based on a rate of vowel errors that was significantly larger than that of the age-matched control groups (N = 741) and exceeded 10% of the target vowels, we identified 30 participants with a selective difficulty in vowel writing. Their errors involved omissions, additions, transpositions, and substitutions of vowels, and occurred only or almost only on vowels, not on consonants. We analyzed their error patterns and the effects that influenced their writing, and found no per-letter length effect, which rules out the possibility that the impairment is in the buffer. Moreover, their vowel deficit manifested itself only in nonwords or, in cases where they also had surface dysgraphia, in both words and nonwords. This points to an impairment in the sublexical route, in the conversion of vowels into vowel-letters. Most of the participants made more vowel errors in the root than in the morphological affix (Wilcoxon z = 3.28, p = .001). These results indicate that the sublexical writing route includes separate routes for phoneme-grapheme conversion and for the conversion of morphological affixes from their phonological to their orthographic representation. In addition to the 30 participants reported above, we identified a further group of 27 participants with vowel dysgraphia who made predominantly vowel omissions in writing (which were not attributable to surface dysgraphia).

 

Conclusions

These findings cast new light on vowel errors in writing, which until now have been ascribed to a deficit in the orthographic output buffer. We conclude that the impairment underlying vowel dysgraphia is a selective deficit in the sublexical route that affects the conversion of vowel phonemes to graphemes. The results also indicate that there is a separate module in the sublexical route for morphological conversion. The findings have theoretical implications for the dual-route model of writing, as well as for treatment.

Word Recognition and Reading following Temporo-frontal and Basal Ganglia Lesion: A Case Report

ABSTRACT. Introduction

Joseph-Jules Dejerine's seminal research on alexia with agraphia (1891) and pure alexia (1892) was based on symptom-lesion mapping in two patients. Six decades later, Soviet neuropsychology and cognitive neuropsychology (Coltheart, Patterson, & Marshall, 1980) began to offer newer conceptual models of reading/alexia (e.g., the dual-route model) that accounted for the best-known observations in cases with alexia. The rise of cognitive neuroscience and the neurobiology of language has enriched accounts of the conceptual and neural bases of reading (Baldo et al., 2017; Binder et al., 2003; Hillis & Tuffiash, 2002; Hoffman, Lambon Ralph, & Woollams, 2015; Pillay et al., 2017). Price's (2012) review of the neuroimaging studies (PET and fMRI) of language, in particular reading, highlighted conceptual and methodological developments. Neuroimaging studies have largely supported the validity of the dual-route model (Jobard et al., 2003), with notable exceptions (Ripamonti et al., 2017). Despite these achievements, "the neural underpinnings of mapping of print onto sound and meaning remain unclear" (Kemmerer, 2015).

The Problem

The objectives of the current study were: 1) to report a new case of acquired dyslexia; 2) to analyze the performance of this case on several relevant PALPA subtests, which are based on the dual-route model of reading; 3) to discuss implications of the results for the neural underpinnings of alexia; and 4) to expand our understanding of the cognitive and neural mechanisms underlying reading errors.

Procedures

Subject. LK, a 45-year-old male high school teacher with a history of left hemisphere stroke and non-fluent aphasia, is the subject of this study. A neuroimaging (CT scan) study done 18 months post-onset revealed infarcts involving the left temporal region, extending into a portion of the left frontal lobe and the left basal ganglia.

Analyses

Clinical evaluation comprised administration of the BDAE, BNT, Token Test, Reading Comprehension Battery for Aphasia, and Discourse Comprehension Test. The experimental reading evaluation was completed using selected, pertinent subtests of the PALPA.

Results

Word recognition performance was generally good except for the non-word and homophone tasks. Impaired performance on regular words, irregular words, and pseudowords points to a diagnosis of deep dyslexia. Error analyses identified semantic errors (for both regular and irregular words), visual errors, mixed semantic-visual errors, and phonological errors.

Conclusions/Discussion

The characteristics of LK's reading performance match those of deep dyslexia, confirming the applicability of the dual-route model. However, Coltheart's (1980) explanation of the right hemisphere's causal role in the occurrence of semantic paralexias could not be directly verified. On the other hand, LK's extensive left-hemisphere lesion, especially of the temporal and frontal lobes, can explain the semantic errors. The results of the current study can also be discussed in the context of recent investigations that support the conceptualization of a neural network supporting reading (Binder et al., 2003; Baldo et al., 2017; Hoffman, Lambon Ralph & Woollams, 2015). This emerging conceptualization offers insights into the neural underpinnings of both successful word reading and reading errors (Pillay et al., 2017).

Modified semantic feature analysis for the anomia and dysgraphia: A case study in Chinese

ABSTRACT. Introduction

Semantic Feature Analysis (SFA) is an intervention that aims to improve the naming performance of neurogenic patients with anomia using a structured framework that guides the patients to analyze the semantic features of the naming targets. A recent single-case study (Tam & Lau, 2019) reported evidence that SFA, modified with the use of an unstructured odd-man-out task, was also effective in improving the naming performance of a patient with anomia after surgical intervention for an arteriovenous malformation. The improvement was attributed to the procedures of the odd-man-out task, which encouraged detailed comparison of distinctive semantic features and thereby facilitated semantic processing. The current study aims to replicate the findings of this modified SFA, with adaptations, for a Cantonese-speaking individual with anomia resulting from traumatic brain injury (TBI). In addition, the extent to which the patient's written naming performance improved after the treatment was also examined.

 

Methods

YFS, a 72-year-old right-handed female Cantonese speaker with naming difficulties due to a TBI sustained four years before the study, was recruited. No visual, hearing, or motor impairment was reported. Results of initial assessments indicated that she had a preserved semantic system but poor oral and written naming abilities. Twelve treatment sessions were conducted over six weeks using the modified SFA.

 

Results

Table 1 summarizes the oral and written picture naming performance of YFS in the pre-treatment, post-treatment and maintenance phases. Results of McNemar's tests indicated that YFS showed significant improvement in oral picture naming of 217 selected pictures from Snodgrass and Vanderwart (1980) immediately after [χ²(1) = 8.446, p < .05] and two weeks after treatment [χ²(1) = 22.753, p < .05]. The improvement in written naming accuracy was not statistically significant, but chi-square test results indicated a significant reduction in semantic errors.

 

Conclusions

The current study extended the findings of Tam & Lau (2019), showing that the modified SFA is also effective in improving the naming performance of neurogenic patients with anomia resulting from TBI. YFS's reduction in semantic errors in written naming after the treatment also supports the importance of the lexical-semantic route of writing in Chinese (e.g. Lau, 2020). Theoretical and clinical implications, as well as the specific adaptations we applied to accommodate the cognitive diversity associated with TBI, will be discussed.

Effects of Lexical Retrieval Treatment on Written Naming in Primary Progressive Aphasia
PRESENTER: Carly Miller

ABSTRACT. Introduction

Primary progressive aphasia (PPA) is a speech-language syndrome caused by neurodegenerative disease (Mesulam, 2001). The semantic (svPPA) and logopenic (lvPPA) variants are characterized by prominent anomia and selective written language deficits (Gorno-Tempini et al., 2011; Henry et al., 2012). Written naming is a functionally relevant outcome that is rarely assessed in individuals with PPA, although spelling intervention has shown promise in remediating written language deficits in this population (e.g., de Aguiar et al., 2020; Tsapkini & Hillis, 2013; Tsapkini et al., 2014, 2018). In the current study, we evaluated whether written naming ability improves following treatment primarily focused on spoken naming in participants with sv and lvPPA.

 

Methods

Participants (n=24 lvPPA; n=15 svPPA) were administered Lexical Retrieval Treatment (LRT), which targets spoken naming via a series of tasks designed to capitalize on residual semantic, orthographic, and phonological knowledge (Henry et al., 2019). Importantly, production of the orthographic word form is a component within the training hierarchy during clinician-led sessions as well as home practice (modified Copy and Recall Treatment, or CART, Beeson & Egnor, 2006).

Written naming probes for trained items (eight sets of five nouns) and matched, untrained items (two sets of five nouns) were collected at pre- and post-treatment. We examined the effect of LRT on written naming accuracy by calculating the proportion of correct letters for each word at each timepoint (Goodman & Caramazza, 1985). To examine whether written responses effectively conveyed the participants’ intended meaning, blinded coders attempted to identify the target based on written responses. Responses were coded as 1 (correctly recognized) or 0 (not recognized) and summed for trained and untrained sets.
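
As a simplified sketch only (the Goodman & Caramazza scoring system specifies more detailed credit rules), the snippet below approximates the proportion of correct letters as the number of target letters matched between the written response and the target, computed with Python's difflib and divided by target length.

# Simplified approximation only, not the Goodman & Caramazza (1985) scoring
# system: proportion of target letters matched between response and target.
from difflib import SequenceMatcher

def proportion_correct_letters(response: str, target: str) -> float:
    matcher = SequenceMatcher(None, response.lower(), target.lower())
    matched = sum(block.size for block in matcher.get_matching_blocks())
    return matched / len(target)

print(proportion_correct_letters("girafe", "giraffe"))    # -> 0.857
print(proportion_correct_letters("elefant", "elephant"))  # -> 0.75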

 

Results

Letter-by-letter scoring was highly reliable between two independent raters (ICC = .93, 95% CI [.92, .94], F(780, 780) = 27, p < .0001). Accuracy data were analyzed using 2 x 2 mixed ANOVAs (between-subjects factor = lvPPA/svPPA, within-subjects factor = timepoint).

For trained sets, the main effect of timepoint (p < .001) and the interaction (p = .01) were significant, indicating that both groups of participants improved from pre- to post-treatment (Mpre = .18, Mpost = .76) and that individuals with svPPA performed worse at pre-treatment and better at post-treatment relative to individuals with lvPPA. For untrained sets, there was a significant main effect of timepoint (p < .001), reflecting better performance at post-treatment for both groups (Mpre = .20, Mpost = .32). Wilcoxon signed-rank tests revealed that target items were more recognizable at post-treatment for trained sets for individuals with svPPA (Z = -3.38, p < .001) and lvPPA (Z = -4.13, p < .0001), but not for untrained sets of items (p's > .2).

 

Conclusion

Overall, our findings indicate that naming treatment incorporating orthographic self-cueing and CART leads to improved written naming for trained and untrained words in individuals with lvPPA and svPPA. Moreover, improved recognition of target words by naïve readers supports the functional utility of this treatment. Future studies should evaluate maintenance of written naming at follow-up timepoints.

Identifying Phonological Planning Deficits Independent of Apraxia of Speech
PRESENTER: Natalie Busby

ABSTRACT. Introduction

Individuals with aphasia may have numerous co-occurring deficits, which can be difficult to isolate. Apraxia of speech (AoS) is a distinct speech production disorder  that can occur independently or alongside other language disorders (Ogar et al., 2005). It may mask problems associated with co-occurring disorders such as inner-speech deficits (Stark et al., 2017), making it difficult to identify the root of incorrect or null responses on speech tasks. However, to produce patient-specific impairment-based interventions, we need to have a good understanding of the functional deficits within individuals. Therefore, the aims of this project were to

  1. identify patients who have deeper phonological deficits which are not masked by AoS
  2. identify to what extent AoS is independent of phonological planning problems

 

Methods

Participants were in the chronic stage of recovery (N = 100, 42 Females; stroke age M = 55.79 years, SD = 11.93, >6 months post-stroke), following a left-hemisphere stroke with no accompanying neuropsychological disorders. They completed a battery of behavioral tests including subtests of the Psycholinguistic Assessments of Language Processing in Aphasia (PALPA; Kay et al., 1996), Naming 40 (picture naming task; Fridriksson et al., 2006, 2007), and the Aphasia Severity Rating Scale (ASRS; Strand et al., 2014). PALPA14 (covert (i.e., non-auditory) rhyme judgement requiring picture selection) was identified as a test of ‘inner-speech’ or phonological encoding accuracy without articulation.

 

Individuals with similar deficits were grouped using hierarchical cluster analysis based on PALPA14, Naming 40, and AoS severity scores. To identify whether deficit patterns or PALPA14 scores could be uniquely predicted by other behavioral variables, stepwise linear regression analyses were conducted using 12 other behavioral scores in SPSS.
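
For illustration, a minimal Python sketch of hierarchical clustering on standardized scores is given below; the linkage method, the choice of four clusters, and the simulated data are hypothetical, and this is not the authors' analysis.

# Minimal sketch (not the authors' analysis): hierarchical clustering of
# participants on three standardized scores. Column order, linkage method,
# and the four-cluster cut are hypothetical illustration only.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore

rng = np.random.default_rng(0)
scores = rng.normal(size=(100, 3))        # columns: PALPA14, Naming 40, AoS severity
scores_z = zscore(scores, axis=0)         # standardize each measure

Z = linkage(scores_z, method="ward")                 # agglomerative clustering
groups = fcluster(Z, t=4, criterion="maxclust")      # cut the tree into 4 groups

for g in np.unique(groups):
    print(f"group {g}: n = {np.sum(groups == g)}")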

 

Multivariate voxel-based lesion-symptom mapping (VLSM) analyses using Freedman-Lane were conducted using PALPA14 and AoS severity scores to identify whether articulation (phonetic planning) and inner-speech (phonological planning) deficits are associated with different underlying neural substrates (2,000 permutations, p < 0.05) using the NiiStat toolbox for Matlab.

 

Results

Four groups emerged from the hierarchical cluster analysis and are summarized in Table 1.

Regression results revealed that aphasia severity (WAB-R AQ; Kertesz, 2007) and PALPA17 performance (auditory segmentation) together predicted 35% of the variance in PALPA14 scores (F(1,86) = 35.39, p < 0.001, R² = .32). No significant predictors were found for group membership based on the hierarchical clustering analysis.

 

Multivariate VLSM analyses associated AoS with damage to inferior frontal gyrus opercularis, precentral and postcentral gyri and superior longitudinal fasciculus. In contrast, PALPA14 scores were associated with damage to posterior and retrolenticular portions of the internal capsule.

 

Conclusions

Individuals were identified with phonological planning problems independent of AoS or a naming deficit. This supports the notion that phonological planning is an independent component in speech production, augmented by the association of differential neural substrates between AoS and phonological planning deficits. However, PALPA14 does require a heavy cognitive load and may have been too difficult for some individuals, so that its hypothesized status as a transparent window on inner speech planning should remain under scrutiny as well.

STEPS in sign language: The pattern of errors made by sign language users with impaired POB
PRESENTER: Neta Haluts

ABSTRACT. Introduction

In spoken languages, individuals with Phonological Output Buffer (POB) impairments make phonological errors (i.e., substitutions, omissions, and insertions of phonemes) in production, repetition, and reading aloud of morphologically simple words and nonwords, whereas they make whole-unit errors (i.e., substitutions, omissions, and insertions of whole-units from the same category) in morphological-affixes, function-words, and number-words (e.g., substituting a function-word with another function-word, a morphological-affix with another morphological-affix etc., Cohen et al., 1997; Delazer & Bartha, 2001; Dotan & Friedmann, 2015; Gvion & Friedmann, 2012; Marangolo et al., 2005), a phenomenon called STEPS – Stimulus Type Effect on Phonological and Semantic errors (Dotan & Friedmann, 2015). This pattern can be explained by assuming that these categories are stored as pre-assembled phonological units in dedicated mini-stores within the POB.

Sign languages exhibit some unique morphological structures, such as classifier constructions, morphological facial-expressions, agreement-verbs, and numeral incorporation (NI), which may be susceptible to whole-unit errors. In this study we aimed to identify for the first time deaf signers with low/impaired POB and examine how the STEPS phenomenon is expressed in a language of the visuo-spatial modality.

 

Methods

We tested deaf native signers of Israeli Sign Language (ISL) using 5 sequence-recall tests we developed to identify LOOPS – participants with LOw Output Phonological Spans suspected of having an impairment to the POB. Then, we compared the performance of the LOOPS and the controls on 3 tests including structures suspected to be sensitive to whole-unit errors:

(1) “Triplets” test – production of classifiers and morphological facial-expressions. In this task the participant was presented with sets of three pictures of similar objects differing in one feature (e.g., three chairs of different sizes), where one of the objects was marked. The participant was requested to sign to the experimenter which object was marked. The features were selected so that they would elicit the use of classifiers and morphological facial-expressions (adjectives/adverbs).

(2) Repetition of sentences with morphological structures (classifier constructions, facial-expressions, and NI) and function-signs.

(3) Comprehension and production of agreement-verbs – the experimenter performs an action (e.g., giving the participant a strawberry). Then, the experimenter signs a sentence that describes the action, and the participant decides whether the sentence correctly describes the action and, if not, produces a full sentence describing the action. The target sentences require the use of agreement-verbs.

Results

Like speakers with POB impairment, the LOOPS made more phonological errors than the controls in morphologically simple signs and in bases of morphologically complex signs, but made whole-unit errors in number-signs, function-signs, and morphological-affixes. Unlike the controls, the LOOPS did not show a recency effect when repeating lists of signs, mirroring the pattern reported by Vallar & Papagno (1986) for a hearing patient with impaired phonological working memory (pWM).

 

Conclusions

The error pattern of the POB-impaired signers was similar to the pattern reported for spoken-language users. These findings show that similar impairments to pWM mechanisms can be found in sign-language users and in speakers of spoken-languages, and suggest that similar pWM mechanisms are responsible for both sign-language and spoken-language processing.

Phonological Input or Output? A Case of Phonological Input Deficits in Logopenic Primary Progressive Aphasia

ABSTRACT. Introduction

Logopenic variant PPA (lvPPA) is characterized by sentence repetition deficits (Gorno-Tempini et al., 2011). Repetition errors in lvPPA are often attributed to phonological working memory (P-WM) deficits, but there has been little research localizing impairments to input or output phonological processes. We present evidence from CLR1796, an individual with lvPPA, who showed selective disruption to phonological input processes with relatively intact phonological output.

 

Case History

CLR1796 was 72 years old with 12 years of education. He was 1.5 years post-symptom onset, prior to which he had no history of other neurologic impairments or learning disabilities. The Mini-Mental State Exam (Folstein et al., 1975) indicated a mild cognitive impairment and the MRI showed diffuse atrophy. He did not have apraxia.

 

Results

Evidence for input-specific phonological impairments

CLR1796 had nearly perfect oral reading but impaired repetition of sentences, words, and nonwords, with both phonological and semantic errors in repetition. The presence of semantic errors in repetition (e.g., jab - stick) supports disruption affecting the lexical level. His few reading errors were almost exclusively regularizations of irregular words (e.g., pronouncing “sew” as “sue”). This dissociation between repetition and reading points to a deficit in spoken input rather than output processing.

 

Disruption to multiple phonological input processes

We further investigated phonological input at the levels of phonetic processing, P-WM, and the phonological input lexicon. Phonetic processing was assessed using the first subtest of the PALPA (Kay et al., 1996). CLR1796 showed impairments on PALPA 1 nonword minimal pairs (69.4%). When given a modified version of PALPA 1 with reduced P-WM demand (“different” trials presented visually; CLR1796 was asked to identify a spoken target by pointing to one of two written nonwords) several weeks after the original PALPA 1 administration, he scored 91.7%. His relatively high score on this task suggests P-WM impairments may have contributed to seemingly poor phonetic processing on the original PALPA 1. His poor performance on the rhyme probe task from the Temple Assessment of Language and Short-Term Memory in Aphasia (TALSA; Martin et al., 2018) is also indicative of a P-WM impairment. Testing of the phonological input lexicon via PALPA 5 Auditory Lexical Decision revealed particularly poor performance on low-frequency and low-imageability items.

 

Modality specificity of Working Memory (WM) impairment

To determine whether CLR1796 has a domain-general WM deficit or a specific P-WM deficit, we assessed WM performance in the visuospatial domain. Visuospatial WM was measured with computerized Corsi blocks (Mueller et al., 2014), and his forward span was 4, which is within normal limits.

 

Conclusions & Future Directions

CLR1796 has an input-specific phonological impairment affecting multiple input components including P-WM and the phonological input lexicon. Orthographic and visuospatial processing were relatively intact compared to phonological operations. An ongoing case series investigation explores how commonly input-specific vs. output-specific phonological impairments are observed in lvPPA. Gaining a better understanding of the underlying impairment in lvPPA may lead to the development of more beneficial and targeted treatments as well as diagnostic tools.

The Interaction of Auditory Processing and Semantic Processing in Wernicke's Aphasia
PRESENTER: Holly Robson

ABSTRACT. Background

The auditory comprehension impairment in Wernicke’s-type aphasia (WA) is clinically challenging at the chronic stage. Limited therapy research has targeted phonological input, lexical, and semantic stages of comprehension, with inconsistent findings. More positive outcomes are associated with higher therapy dosage (>60 hrs) (e.g. Fleming et al., 2021). Along with dose, specificity is a key aphasia treatment principle – i.e. treatment should focus on the primary impairment (Kiran & Thompson, 2019). The cognitive profile of chronic WA is best described as a combination of acoustic, phonological and semantic processing impairments (Robson et al., 2012). A systematic link between acoustic-phonological processing and auditory comprehension has been repeatedly demonstrated at the chronic stage (e.g. Robson, Grube, et al., 2013); however, similar associations with semantic processing have not. Given the integral role of semantic analysis in auditory comprehension, the lack of associative evidence may result from small, under-powered studies. This study collated data from a large group of participants with WA across multiple neuropsychological measures to explore the relationship between auditory comprehension, acoustic-phonological, and semantic processing.

 

Method

A multiple regression analysis was performed using normalised neuropsychological data from 37 participants with chronic WA. The dependent variable, auditory comprehension, was derived from four published and unpublished comprehension assessments covering single word, phrase and sentence levels. Two independent variables were derived from a broad assessment battery: (1) Acoustic-phonological processing (frequency and dynamic modulation detection, word and nonword phonological discrimination) and (2) Semantic processing (non-verbal semantic association tests and written synonym judgement tests). Age, peripheral hearing thresholds and time-post-stroke data were available for 35 participants and entered as covariates. Lesion volume was available for 32 participants and explored as a covariate in a secondary analysis.     
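
For illustration, the nested-model comparison described here can be sketched as follows (Python/statsmodels). The data file and the column names comprehension, auditory, semantic, age, hearing, and tpo are hypothetical stand-ins for the composites and covariates above, not the authors' actual analysis script.

import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

d = pd.read_csv("wa_composites.csv")   # hypothetical file of normalised scores

m_cov  = smf.ols("comprehension ~ age + hearing + tpo", data=d).fit()
m_main = smf.ols("comprehension ~ age + hearing + tpo + auditory + semantic", data=d).fit()
m_int  = smf.ols("comprehension ~ age + hearing + tpo + auditory * semantic", data=d).fit()

# F-tests for the improvement of each richer model over the simpler one,
# mirroring the covariates-only vs. main-effects vs. interaction comparison.
print(anova_lm(m_cov, m_main))
print(anova_lm(m_main, m_int))
print(m_main.summary())                # inspect individual predictors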

 

Results

A model containing auditory processing and semantic processing as main effects along with the covariates was a better fit to the data than a model with covariates only (F(2,29) = 12.783, p<0.001). In this model only auditory processing was a significant predictor of Auditory Comprehension (Beta = 0.857 (SE = 0.286), t = 2.996, p<0.01). A model containing the interaction between auditory and semantic processing also significantly predicted Auditory Comprehension (Beta = 0.813 (SE = 0.147), t = 5.536, p<0.001). The interaction showed that when auditory processing was poor, better semantic processing improved Auditory Comprehension moderately. However, when auditory processing was better retained, better semantic processing resulted in greater gains in Auditory Comprehension. Including lesion volume as a covariate did not change the results.

 

 

Conclusions

Impaired auditory processing is a key driver of comprehension impairments in WA. Retained semantic processing has only a limited capacity to compensate for a heavily disrupted input signal. However, comprehension is influenced by semantic processing to a greater extent in individuals with better retained auditory processing. Therapy should focus on auditory processing in severe cases of WA, whereas mixed auditory and semantic therapy is appropriate for milder impairments.

Contribution of phonology and semantics to verb inflection deficit in post-stroke aphasia

ABSTRACT. Introduction

Aphasia can include both phonological (word sound) and semantic (word meaning) impairments (Beeson et al., 2018; Rapcsak et al., 2009). These deficits often co-occur with impaired grammar and verb inflections (Bird et al., 2003; Faroqi-Shah & Thompson, 2003; Thompson, Kielar, & Fix, 2012) and are interdependent, as regular inflection depends on phonological transformations (press→pressed), whereas irregular inflection relies more on semantic relationships between words (i.e., the past tense of ring is rang, not *ringed) (Kielar et al., 2008; Kielar & Joanisse, 2010). The goal of the current study is to understand how deficits in the phonology and semantics of language contribute to verb inflection impairment in post-stroke aphasia.

 

Methods

The participants were 13 individuals (Age: M = 59y, SD = 4.9; Education: M = 16y, SD = 0.8; 8 males, right-handed) diagnosed with chronic aphasia (TPO = 6.3y) resulting from left hemisphere stroke, and 14 age- and education-matched healthy controls. Phonological skills were measured using the Arizona Phonological Battery (APB) (Beeson et al., 2016; Rapcsak et al., 2009). Semantic knowledge was assessed using the Camel and Cactus Test (Adlam et al., 2010), the spoken word-to-picture and written word-to-picture matching tasks (PALPA 47 and 48) (Kay et al., 1996), and an auditory synonym judgment test (PALPA 49). Semantic processing specific to verbs was assessed using synonym judgements of verbs (Patterson et al., 2001). Inflection of regular and irregular verbs was assessed using a past tense elicitation task with words and pseudowords (e.g., Susan likes to walk/feep. Yesterday she __walked/fept).

In a cross-modal ERP priming task, participants performed a lexical decision on a visual target: for example, they heard “baked”, were then visually presented with a word or nonword (e.g., BAKE or SMOB), and made a button-press decision, “Is this a real word or a nonword?”

 

Results

To examine the degree to which phonological and semantic skills predict past tense inflection ability, we performed linear regression with past tense scores as the dependent variable and the phonological and semantic composites as predictor variables. After accounting for comprehension and production, phonological scores were a significant predictor of regular past tense inflection (b = .736, p = .006) and of weak-irregular scores (b = .625, p = .030). The semantic composite was a significant predictor for all irregulars (b = .639, p = .025) and for strong-irregulars (b = .601, p = .05). For regularized pseudo-words, phonology emerged as a significant predictor of performance (b = .805, p = .002). Production ability was not a significant predictor of the past tense inflection scores (b = -.474, p = .143) or pseudo-word scores (b = -.425, p = .133). Priming effects (reaction time unrelated minus related) for inflected verbs in our ERP experiments are indicative of phonological or semantic deficits in participants with aphasia.

 

Conclusions

The results indicate that past tense inflection ability for real verbs and pseudo-words can be predicted from the underlying phonological and semantic impairments. Although phonological skills are crucial for both regular and irregular verb inflection, semantic impairment impacts inflection of strong-irregular verbs to a greater degree.

Paraphasia in Two Forms of Conduction Aphasia

ABSTRACT. Introduction

Two forms of conduction aphasia (CA) are currently identified in the clinical literature: repetition conduction aphasia (RCA) and conduit d’approche (CD). Individuals with conduction aphasia tend to produce phonemic/phonological paraphasia (PP) (Caplan, Vanier, & Baker, 1986; Gagnon et al., 1997; Schwartz et al., 2004; Ueno & Lambon Ralph, 2013). Although PP are the sine qua non of CA, individuals with CD tend to produce several phonological approximations to the target. Hence, CD appears to be more phonologically impaired than RCA. A current neurobiological model attributes CA to impairment of the dorsal stream, whereas semantic paraphasia is attributed to the ventral semantic stream (McKinnon et al., 2018). However, it is not known whether these two forms of conduction aphasia have similar neurological underpinnings. The objectives of the current study were to report on paraphasia in two types of conduction aphasia and their lesion characteristics.

 

Method

Subjects. JL, a 66-year-old white female bank employee, had a stroke and consequently developed conduction aphasia. She exhibited an urge to self-correct her speech production errors (conduit d’approche). Phonemic and semantic paraphasia were the predominant characteristics of her speech production. JL’s CT scan revealed a left parietal lobe lesion including the supramarginal and angular gyri, as well as a posterior temporal lobe lesion. PP, a 65-year-old Caucasian female, suffered a stroke. At the acute and subacute stages, PP’s language performance profile was that of Wernicke’s aphasia. However, at two years post-onset, her language profile was that of conduction aphasia. PP did not attempt to self-correct her literal paraphasia to the extent JL did. A CT scan revealed a non-hemorrhagic infarct in the left parietal lobe, including the supramarginal and angular gyri, and the mesial posterior temporal lobe.

Procedure. Two tests were administered: 1) the Boston Diagnostic Aphasia Examination (BDAE) and 2) the Boston Naming Test (BNT). All paraphasic responses were derived from the confrontation naming (BNT) responses. These were broadly classified into phonologic, neologistic, semantic, and mixed types of paraphasia. Both subjects were in the chronic condition, two years post-stroke. It is important to note that the BNT includes target words of high, mid, and low frequency of occurrence. Semantic aspects of language are represented in the lexical-semantic production and comprehension tasks of the BDAE.

 

Results and Discussion

Both subjects produced both phonologic and semantic types of paraphasia, but phonologic paraphasias outnumbered semantic paraphasias (Table 1). This pattern of performance is probably the result of lesions in both the dorsal and ventral streams. The variability in the subjects’ performance is discussed further. A significant observation pertains to the effect of word frequency on the occurrence of paraphasia in both subjects. For intervention, both lexical-semantic and phonological treatments can be tried. Cases with comparable, though not identical, lesion sites produced uniquely different patterns of paraphasia. A limitation of this study is the lack of information on the extent of axonal loss in each case, which makes it difficult to interpret the patterns of paraphasia in the context of the ‘Dual-stream Model’ described by McKinnon and colleagues (2018).

Word Class-Based Clustering and Switching Analyses of Phonemic Fluency in Alzheimer’s Disease
PRESENTER: Eunha Jo

ABSTRACT. Introduction

Verbal fluency tasks are well known to sensitively detect cognitive-linguistic declines in Alzheimer’s disease (AD) (Murphy et al., 2006). Word class dissociations have been a critical issue in research on the cognitive and linguistic deficits of neurological diseases. However, no studies have examined whether word class dissociations can be identified in phonemic verbal fluency in AD, or how word class-based analyses of clustering and switching behaviors affect overall performance on fluency measures. The current study investigated whether word class dissociations emerged in a phonemic fluency task and explored the best predictors of the number of correct responses among word class-based clustering and switching behaviors, in addition to demographic variables, in AD.

 

Methods

Participants were 58 individuals with probable AD from the DementiaBank Pitt Corpus (Becker et al., 1994). Participants generated words beginning with f for 60 seconds. We categorized the word class of each item and analyzed word class-based mean cluster size and number of switches.
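
As an illustration of the scoring, the sketch below computes word class-based mean cluster size and number of switches for one hypothetical response list; runs of consecutive same-class words are treated as clusters and each change of class as a switch. Cluster size is taken here as run length, and the example words and tags are invented, so the sketch shows the logic rather than the exact scoring conventions used in the study.

from itertools import groupby

# Hypothetical tagged responses, in order of production: (word, word_class) pairs.
responses = [("farm", "noun"), ("fence", "noun"), ("fall", "verb"),
             ("find", "verb"), ("fast", "adjective"), ("fork", "noun")]

classes = [c for _, c in responses]
runs = [len(list(g)) for _, g in groupby(classes)]   # lengths of consecutive same-class runs

mean_cluster_size = sum(runs) / len(runs)            # mean run length
n_switches = len(runs) - 1                           # number of class changes

print(mean_cluster_size, n_switches)                 # 1.5 and 3 for the example list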

 

Results

Word Class Analyses

Nouns were the most frequently generated word class, constituting 71% of the total words, followed by verbs (15%) and adjectives (13%). The proportions of adverbs and prepositions were less than 1%, and these classes were excluded from the subsequent regression analyses.

 

Multiple Regression Analyses

To examine the best predictors for the number of correct responses, we conducted stepwise multiple regression analyses with word class-based mean cluster size, the number of switches, and demographic variables as predictors. The results revealed that the models with the number of switches, F(1,56)=61.946, p<.0001, R2=.525, and with the number of switches and mean cluster size, F(2,55)=44.911, p<.0001, R2=.620, significantly predicted the number of correct responses, suggesting that the number of switches is the most influential predictor for correct responses, accounting for 52.5% of the total variance.

Furthermore, we explored significant predictors of the number of switches as a dependent variable, with the numbers of nouns, verbs, and adjectives and demographic variables as independent variables. The models with the number of verbs, F(1,56)=61.060, p<.0001, R2=.522, with the numbers of verbs and adjectives, F(2,55)=98.468, p<.0001, R2=.782, with the numbers of verbs, adjectives, and nouns, F(3,54)=76.270, p<.0001, R2=.809, and with the numbers of verbs, adjectives, nouns, and education, F(4,53)=62.071, p<.0001, R2=.824, significantly predicted the number of switches. Results indicate that the most influential variable was the number of verbs, which explained 52.2% of the variance.

 

Conclusions

The current results revealed a strong advantage for nouns over verbs or adjectives, in line with previous findings showing that individuals with AD have more difficulty retrieving verbs than nouns (Cotelli et al., 2006). Switching contributed most to increasing the number of correct responses. Although nouns were the most frequently generated word class, the number of verbs turned out to be the most crucial factor for facilitating switching, indicating that the ability to generate more verbs is related to eliciting more switching behaviors. The results suggest that individuals with AD who can activate a diverse range of word classes can generate more correct responses through more frequent switching.

Changes in Effective Connectivity Following Language Treatment for post-stroke patients with Aphasia
PRESENTER: Tammar Truzman

ABSTRACT. Introduction

Background: In recent years, many studies have focused on the mechanisms underlying language rehabilitation after left hemisphere stroke. Some studies suggest that normalization of the language network is crucial for language recovery, while others suggest that compensatory processes, such as right hemisphere involvement in language processing, support language recovery. Examining changes in brain connectivity during language therapy can shed new light on this question. Our second aim was to examine to what extent treatment-related changes in brain connectivity are specific to the treated linguistic process (i.e., phonology), or whether they generalize to other neurolinguistic processes (i.e., semantics).

 

Methods: This is a reanalysis of previously reported data (Leonard et al., 2015; Leonard, Rochon, & Laird, 2008; Rochon et al., 2010). Four participants with aphasia (PWA) and anomia following left hemisphere stroke and eight healthy controls (HC) participated in the study. Two fMRI scans were administered to all participants, with a 3.5-month interval on average. In the time between the two fMRI scans, the PWA underwent phonological components analysis (PCA) treatment. The fMRI scans included phonological and semantic tasks and a perceptual matching control task.

Analysis: Dynamic Causal Modelling (DCM) was used to examine effective connectivity among three right hemisphere regions: dorsal IFG (rdIFG), ventral IFG (rvIFG), and lateral temporal cortex (rLTC). The analysis was conducted separately for the phonological and semantic tasks, and all possible connections were included in the model. We identified connections averaged across the linguistic and perceptual conditions in each task (A matrix) and connections that were modulated only by the language (phonological or semantic) condition (B matrix). For these connections, we asked which changed from pre- to post-treatment in PWA but not in HC.
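
For reference, the A and B matrices referred to here are parameters of the standard bilinear DCM neural state equation, in which A encodes the average (condition-independent) connectivity, each B(j) encodes how experimental input j (here, the phonological or semantic condition) modulates those connections, and C encodes driving inputs:

\dot{x} = \Big( A + \sum_{j} u_{j}\, B^{(j)} \Big)\, x + C\, u

where x is the vector of neural states of the three regions and the u_j are the experimental inputs.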

 

Results

1) The averaged connectivity across all conditions (A matrix) changed in three connections from pre- to post-treatment only in PWA: the bidirectional rvIFG↔rLTC connections in the phonological task, and the self-connection of rLTC in the semantic task, all increasing in resemblance to HC. Because this averaged connectivity reflects common lexical access components, which are typically associated with a bilateral network, the increased resemblance to HC may reflect normalization of connectivity in the intact RH. 2) The modulatory effect of the phonological condition (B matrix) on the connection rLTC → rdIFG was strengthened during treatment only in PWA, unlike in HC, in whom this effect is inhibitory. Because phonological processing is typically associated with the left hemisphere, this change may reflect compensation. No changes were found in the modulatory effect of the semantic condition.

 

Conclusions

Following language treatment, we found changes in the connectivity among RH homologs of language regions in PWA. The results indicate that both compensatory and normalization processes play a role in language recovery, and both may simultaneously underlie the involvement of the RH in the chronic phase of aphasia. Most treatment-related changes in the current study were associated with phonological processing, which was the focus of treatment, with some indication of changes in connectivity associated with semantic processing. Nevertheless, the small sample size of the current study limits the generalizability of these findings.

Measuring pragmatic competence of discourse output among Chinese-speaking individuals with traumatic brain injury
PRESENTER: Ho Ying Lai

ABSTRACT. Introduction

Pragmatic competence is the ability to use language effectively and in a contextually appropriate fashion. Previous studies suggested that many individuals with traumatic brain injury (TBI) have relatively intact language ability but demonstrate difficulties communicating appropriately and effectively across contexts because of impaired pragmatic skills (Dahlberg et al., 2007). Most previous studies have focused on discrete levels of linguistic analysis of TBI discourse production and often neglected pragmatic competence. This study aimed to examine how pragmatic competence may be impaired and reflected in the discourse produced by TBI survivors. Moreover, we explored whether (and which) discourse production tasks are more sensitive and clinically effective in highlighting pragmatic impairments in TBI.

 

Methods

Language samples of five discourse tasks, produced by ten TBI survivors (five Cantonese and five Mandarin speakers) and ten controls matched in age and education, were extracted from the unpublished Chinese TBI-Bank (see database description in Kong, Lau, & Cheng, 2020). These genres included a single-picture description ‘Cat Rescue’, a multiple-picture description ‘Refused Umbrella’, a story-telling task ‘The Boy Who Cried Wolf’, a procedural discourse ‘Egg and Ham Sandwich’, and a personal narrative (i.e., monologue) ‘An Important Event’. Each sample was analyzed with 16 indices, adopted and modified from Andreetta et al. (2012), Cummings (2021), Galski et al. (1998), and Kong and Law (2004), which were further categorized in terms of Grice’s Maxims (Grice, 1975):

  • Maxim of Quality: i) Number of errors (Er), ii) Index of Errors (IEr), iii) Index of Syntactic Accuracy (ISA), and iv) Repairs and revisions of errors
  • Maxim of Quantity: v) Total number of words per task (N), vi) Number of information words (I-word), vii) Number of Terminable units (T-units), and viii) Words per T-unit
  • Maxim of Relation: ix) Global coherence errors, x) Percentage of global coherence errors, xi) Local coherence errors, and xii) Percentage of local coherence errors
  • Maxim of Manner: xiii) Repetition of words and phrases, xiv) Index of Lexical Efficiency (ILE), xv) Index of Communication Efficiency (ICE), and xvi) Number of cohesive devices per T-unit

 

Preliminary results and Discussion

Preliminary results suggested that speakers with TBI had more deficits related to the Maxim of Relation and the Maxim of Manner, but the pragmatic impairments seemed to be highly individualized. The TBI speakers’ pragmatic performance also tended to be related to their attention and visuospatial problems, as reflected by their scores on the Cognitive Linguistic Quick Test (CLQT; Helm-Estabrooks, 2001). Specifically, increased violation of the Maxim of Relation was found in genres with less visual support. More global coherence errors were also found in procedural discourse than in storytelling, but a clear genre effect could not be concluded.

Further data analyses are underway. The association between pragmatic measures and the types of discourse, amount of visual supports, and TBI survivors’ severity of language impairment and cognitive deficits will be assessed. We believe the final findings will allow us to examine pragmatic deficits in TBI and to compare the manifestation across different genres.

Lessons in Memoirs Reflect Author Identity on the Journey toward Communicative Recovery

ABSTRACT. Introduction

 

Written memoirs of people who have aphasia offer a rich, accessible source of insights into their lived experience. In the tradition of narrative medicine, aphasiologists seek to “recognize, absorb, interpret, and be moved by” stories of people with aphasia, equipping us to be “more humane, more ethical” in our clinical practice and research (Charon, 2006, p. vii). Memoirs capture, in perpetuity, the natural evolution of stories following traumatic onset of stroke and aphasia, as they intersect with selfhood: from Chaos to Restitution to Quest (Frank, 1995).  In initial chaos, “the suffering is too great for a self to be told.”  Later, in restitution, “the active player is the remedy (intervention) … they are self-stories only by default.”  Finally, quest stories “accept illness and seek to use it,” given the self’s belief that “something is to be gained through the experience” (Frank, 1995, p. 115).  Frank notes, “most published illness stories are quest stories” (p. 115). 

 

A study of oral stories of American World War II veterans (Ulatowska et al., 2020) discerned central themes in stories of quest and reconciliation: gratitude for survival; sharing of lessons, as legacy; and the key role of identity. In aphasiology, thematic analysis of quest stories is a relatively new approach to narratology (Ulatowska 2010, 2014). Traditionally, the field has focused on linguistic deficits in elicited oral narratives; early studies of memoirs, too, focused on linguistic impairments (Ulatowska et al., 1979). The present study examines the content of memoirs of authors with aphasia, as shaped by each author’s identity. One of the earliest substantial memoir contributions (Luria, 1972) is included in the sample.

 

Methods

 

Sample: twenty-seven books and six articles, memoir genre, authored by people impacted by aphasia; all in English or English translation; most authored or co-authored by people with aphasia, and a few by an intimate. Authorship included representation from American, British, Polish, Swedish, Australian, and Russian cultures. Most authors were well educated. Some were professional writers pre-morbidly.  Content analysis: 1) metacommentary on the pragmatic purpose and process of writing; 2) quest-oriented content; 3) reflections of identity and culture on quest-oriented lessons. 

 

Results

 

Both professional and non-professional writers commented on the pragmatic need to write their stories, despite the difficulty of the writing process. Quest-oriented content included: gratitude for survival, acceptance of limits on recovery, and implicit and explicit lessons for the readership.  Lesson content suggested identity- and culture-specific influences in the recovery process, including religion/spirituality, professional identity, gender, and (re-)connection with intimates.   

 

Conclusions

 

Content analysis of memoirs of authors with aphasia provides a unique window into their lived experience, legacy, identity, and culture. This pragmatic focus on communicative competence (Olness & Ulatowska, 2020) complements traditional linguistic narratology.  The method may exclude representation of cultures in which written personal stories are inappropriate. Yet, memoir writing, which allows unfettered time for composition, may be ideally suited as a timeless contribution in human legacy. As one memoir author with aphasia notes (Luria, 1972), ‘writing is exhausting, a titanic effort, which confirms one’s humanity.’

Embedding in language and in thinking: A double dissociation in aphasia and aTOMia

ABSTRACT. We examined a fundamental question about the relation between language and thought: Whether a dissociation can be detected between language and thought by examining whether the ability to embed thoughts (as part of theory of mind, TOM) relies on the ability to create and comprehend syntactic embedding structures that express these states (e.g., "Dana thought that the world is flat"). 

We tested four patients with agrammatic aphasia whose deficit involved a syntactic impairment in embedding, identified using five comprehension, production, and grammaticality-judgment syntactic tasks from the BAFLA battery (Friedmann, 1998). Four others had aTOMia (a TOM deficit), diagnosed using 16 stories and 4 cartoons from the aTOMia battery, assessing second-order theory of mind abilities (Balaban et al., 2016).

The results showed a double dissociation between embedding in language and embedding in thinking. The four participants with a TOM deficit performed very well on the comprehension and production of syntactic embedding (95%) while their performance on the aTOMia battery was poor (36%). In contrast, the participants with agrammatic aphasia were able to represent second-order mental states (90%) but still showed a significant impairment in the comprehension and production of syntactic embedding (46%). 

This double dissociation indicates that once the abilities of linguistic embedding and mental embedding are acquired, they are independent and each can be compromised independently. Our study indicates that individuals with aphasia can retain the ability to think about other people’s thoughts even when they lose syntactic abilities; one can lose the ability to speak about something, but still be able to reflect about it and understand it. This indicates that at least in this domain thought is not completely dependent on language. Creating a dialogue about other people’s thoughts with those who have lost crucial aspects of their language ability is possible and desirable.

Pragmatics in (non-)typical handers: in search for evidence of reversed localization
PRESENTER: Olga Buivolova

ABSTRACT. Introduction

Pragmatic abilities refer to a set of skills including holding an appropriate conversation in a given context, correct usage of non-literal and figurative expressions (e.g., idioms and humor), and use of non-verbal communication means (e.g., gestures and proxemics; Parola et al., 2016). Most studies attribute pragmatic processing to cortical structures of the right hemisphere (RH; e.g., Cutica et al., 2006). However, there are many open questions regarding RH involvement in pragmatic processing. One of them is the neural organization of pragmatics in people with non-typical handedness (e.g., left-handers). For instance, there is limited evidence that left-handers might present a reversed pattern, which implies that pragmatics is processed in the left hemisphere (LH; Gloning et al., 1969). The aim of the present study is to explore the brain substrates of pragmatic abilities in people with typical and non-typical handedness by investigating the effects of RH lesions on pragmatics in these two groups.

 

Methods

A case-series approach was used. To date, five people with a chronic RH stroke have participated in the study. Two were left-handed and three were right-handed. All participants were tested with the Russian Aphasia Test (RAT; Ivanova et al., 2019) to evaluate the presence of a language deficit, and with the Test for the Assessment of Pragmatic Abilities and Cognitive Substrates (APACS; Arcara & Bambini, 2016; Russian version: Tomas et al., in preparation). All participants underwent standard clinical structural MRI.

 

Results

None of the participants demonstrated language impairment. At the same time, the left-handed participants scored below the cutoff on the APACS, indicating impaired pragmatic abilities. No pragmatic deficits were revealed in the right-handers. However, by coincidence, all the left-handed participants had a cortical lesion, and all the right-handed participants had a lesion restricted to subcortical structures.

 

Conclusions

So far, our results are consistent with existing literature to the extent that an RH cortical lesion causes pragmatic deficits. At this stage, we were not able to establish the effects of typical and non-typical handedness due to the lesion distribution in our patient cohort. More data are being collected to ensure an appropriate comparison between the two groups.

Implicit Inferencing deficits in non-fluent variant Primary Progressive Aphasia
PRESENTER: Eleni Peristeri

ABSTRACT. Introduction. Damage to the left inferior frontal gyrus in individuals with non-fluent variant primary progressive aphasia (nfvPPA) has been associated with syntactic comprehension impairment (Mesulam, 2016), as well as verbal working memory (WM) deficits (Eikelboom et al., 2018). Though verbal WM deficits have been shown to be responsible for nfvPPA patients’ poor performance in syntactic comprehension (Sebastian et al., 2014; Thompson & Mack, 2014), no study to date has investigated the role of WM in implicit and explicit inferencing in language. Explicit inferencing mainly relies on language-based cues (lexicon, syntax), while implicit inferencing relies on the integration of syntactic information with contextual and background world knowledge to enable the comprehender to inferentially derive a coherent interpretation of the input; it is, therefore, considered a pragmatic function (Rohde & Kurumada, 2018). The current study aims (a) to determine the explicit and implicit inferencing abilities of nfvPPA patients compared to controls, and (b) to investigate whether performance on syntactic comprehension predicts explicit and implicit inferencing to the same extent, and whether executive top-down functions such as verbal WM (as measured by digit-span backwards) mediate these relations.

Method. Fourteen Greek-speaking participants with nfvPPA (age range 53-73; mean age: 64 yrs., SD: 5.1) along with eighteen age- and education-matched language-unimpaired adults performed a listening comprehension task (Cain & Oakhill, 1999) that measured explicit and implicit inferencing (among others). Participants were also assessed on syntactic comprehension (Peristeri & Tsimpli, 2010), and digit-span backwards. Simple linear regression was used to show predictive values for behavioral measures (digit span, and explicit and implicit inferencing scores), and multiple linear regression was used to reveal verbal WM mediation effects.
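
The mediation logic described here (comparing the predictive power of syntactic comprehension with and without digit-span backwards) can be sketched as follows. The data file and the column names implicit_inferencing, syntax_comprehension, and digit_span_back are hypothetical, and the sketch illustrates the comparison rather than the authors' actual analysis.

import pandas as pd
import statsmodels.formula.api as smf

d = pd.read_csv("nfvppa_scores.csv")   # hypothetical file of participant scores

m_syntax = smf.ols("implicit_inferencing ~ syntax_comprehension", data=d).fit()
m_both   = smf.ols("implicit_inferencing ~ syntax_comprehension + digit_span_back", data=d).fit()

# A reduced (but still significant) syntax coefficient once digit span is added,
# together with a significant digit-span term, is consistent with partial mediation.
print(m_syntax.params["syntax_comprehension"], m_syntax.rsquared)
print(m_both.params["syntax_comprehension"], m_both.rsquared)
print(m_both.pvalues)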

Results.  The nfvPPA patients performed significantly lower than controls in both explicit (mean nfvPPA=69.1% vs. mean controls=86.6%; p = .011), and implicit language inferencing (mean nfvPPA=43.5% vs. mean controls=96.4%; p < .001). Also, the nfvPPA patients scored lower than controls in syntactic comprehension (max. accuracy score: 16) (mean nfvPPA=9.6 vs. mean controls=15.7; p < .001) and in digit-span backwards (mean nfvPPA=3.9 vs. mean controls=7.1; p < .001). Performance on syntactic comprehension significantly predicted performance on implicit inferencing only. Importantly, syntactic comprehension was not associated with explicit inferencing. To test the mediating role of verbal WM for implicit inferencing, we compared the predictive power of syntactic comprehension with and without digit-span backwards for implicit inferencing performance in a multiple linear regression.  Syntactic comprehension alone had less predictive power of implicit inferencing scores than syntactic comprehension combined with digit-span backwards, thus, suggesting a partial mediation effect of digit-span backwards for implicit inferencing.

Conclusions. The findings indicate that syntactic comprehension deficits are associated with implicit but not explicit inferencing in nfvPPA, showing that the patients’ syntactic comprehension deficit does not impair their understanding of explicit information but contributes to their pragmatic impairments. Importantly, verbal WM mediates the relation between syntactic comprehension and implicit language inferencing in nfvPPA. This pattern indicates that a top-down deficit of nfvPPA in executive functions, such as verbal WM, may partially explain the patients’ pragmatic impairments.

Interpreting indeterminate sentences in aphasia: a probe into semantic coercion
PRESENTER: Caitlyn Antal

ABSTRACT. Sentences such as ‘Mary began the book’ are called indeterminate because they do not make explicit what the subject began doing with the object. These sentences represent a case study for a central issue: compositionality. There are at least two proposals for how the meaning of an indeterminate sentence is attained. One assumes some form of local semantic enrichment relying on internal analyses of the noun complement to yield an enriched composition ([begin the book] -> [begin reading the book]). Another view assumes classical compositionality, with much of the sentence interpretation being the product of pragmatic inferences triggered by a syntactic gap ([began [[the book]]]). The only study investigating this phenomenon in aphasia supported semantic coercion, based on greater difficulty by Wernicke’s patients in understanding indeterminate sentences. We investigated the phenomenon in a group of 41 healthy controls and 14 individuals with aphasia of different etiologies, with lesions in the left or right hemisphere. Participants heard a sentence, immediately followed by two pictures. Their task was to choose which picture best represented the sentence they heard. Sentences were: (a) indeterminate (The academic began the research), (b) fully determinate (“preferred”: …conducted the research), (c) metaphorical (…dumped the research), or (d) determinate but non-preferred (…abandoned the research). Non-fluent [NF] aphasics performed worse with indeterminate sentences compared to controls. Also, individuals with RH lesions performed worse with indeterminate sentences than controls. Together, the difficulty shown by the NF group in selecting the correct picture when presented with an indeterminate sentence suggests that they have problems computing the syntactic gap that may serve to trigger a search for an appropriate event during semantic composition.

Cognitive and metabolic correlates of single-word and nonword reading in mild Alzheimer’s Disease.
PRESENTER: Valeria Isella

ABSTRACT. Introduction

The Dual Route Cascaded model proposes that reading is accomplished by a left dorsal pathway running from the occipital and occipito-temporal cortex, through the superior parietal lobule, to the dorsal frontal lobe, and a left ventral pathway stemming from the same regions and encompassing the posterior temporal cortex and angular gyrus (Taylor et al., 2013). The former route is specialized in reading nonwords and regular words utilizing grapheme-to-phoneme conversion rules, while the latter route processes familiar regular and irregular words by activating semantic and lexical orthographic representations. With this framework in mind, in the current study we investigated the cognitive and metabolic correlates of the ability to read single words and nonwords in mild dementia of the Alzheimer type (DAT). In fact, DAT patients tend to develop dyslexia as the disease progresses, but the cognitive and neural substrate of reading impairment in this form of dementia is still ill-defined.

 

Methods

We assessed the ability to read words (high-frequency concrete, low-frequency concrete, low-frequency abstract, and function words), nonwords, and trisyllabic words with unpredictable stress position (the major ambiguity in reading Italian) in 25 DAT patients in a mild disease stage, compared with 25 age-, sex- and education-matched healthy participants. Patients’ reading performance was correlated with scores on an extensive array of cognitive tests. Furthermore, in 21/25 cases reading scores were correlated with brain FDG-PET using SPM8, with the aim of identifying areas of reduced metabolism associated with poor reading.
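
Conceptually, the FDG-PET correlation amounts to a voxel-wise regression of metabolism on reading score, with cluster-level inference handled by SPM8. A minimal NumPy sketch of the voxel-wise step is shown below; the 4D array pet, the brain mask, and the reading-score vector are hypothetical inputs, and this is not the SPM analysis itself.

import numpy as np
from scipy import stats

# Hypothetical inputs: pet (n_subjects, nx, ny, nz), mask (nx, ny, nz, boolean), reading (n_subjects,)
def voxelwise_correlation(pet, mask, reading):
    voxels = pet[:, mask]                         # (n_subjects, n_voxels) within the brain mask
    r = np.zeros(voxels.shape[1])
    p = np.ones(voxels.shape[1])
    for i in range(voxels.shape[1]):
        r[i], p[i] = stats.pearsonr(reading, voxels[:, i])
    rmap = np.zeros(mask.shape); rmap[mask] = r   # put r values back into brain space
    pmap = np.ones(mask.shape);  pmap[mask] = p
    return rmap, pmap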

 

Results

Characteristics of the two study groups are reported in the Table. Independent Student’s t-tests and a repeated-measures ANOVA (with group as the between-subject variable and type of reading stimulus as the within-subject variable) did not yield any statistically significant intergroup difference or interaction. Linear regression analysis with the total score on the reading task as the dependent variable and the various cognitive tests as independent variables identified only the Pyramids and Palm Trees test (Sb = 0.539, p = 0.002) and Letter Span (Sb = 0.448, p = 0.007) as predictors (R2 = 0.736, p < 0.001), while there was no significant relationship with measures of attention, episodic memory, language production or comprehension, or visuo-spatial and executive abilities.

Significant clusters of hypometabolism are shown in the Figure. A poorer total score on the reading task was associated with reduced FDG uptake in the left angular and pre-central gyri. The angular cluster emerged as the unique correlate of word reading, while nonword reading was associated with hypometabolism in the left>right pre-central cortex and in the left anterior cingulate cortex.

 

Conclusions

As expected from past evidence, our mild DAT patients showed preserved ability to read words and nonwords. Results of behavioral and metabolic imaging analyses converged in highlighting that such an ability was sustained by an anatomo-functional system that involves semantic processing, mapped to the angular gyrus, phonological processing, mapped to the posterior frontal cortex, and attentional processes, mapped to anterior cingulate cortex. These brain regions have all already been reported as crucial for reading in prior neuroimaging studies and meta-analyses (Martin et al., 2015; Taylor et al., 2013; Vogel et al., 2013).