Vol. 79. Issue 3.
Pages 336-341 (May - June 2013)
Original Article
Open Access
The influence of speech stimuli contrast in cortical auditory evoked potentials
Kátia de Freitas Alvarenga a,*, Leticia Cristina Vicente b, Raquel Caroline Ferreira Lopes b, Rubem Abrão da Silva c, Marcos Roberto Banhara d, Andréa Cintra Lopes e, Lilian Cássia Bornia Jacob-Corteletti e

* Corresponding author: katialv@fob.usp.br. Send correspondence to: Kátia de Freitas Alvarenga, Al. Dr. Octávio Pinheiro Brisola, nº 9-75, Bauru - SP, Brazil. CEP: 17012-901.
a PhD, Associate Professor - Department of Speech and Hearing Therapy, School of Dentistry, University of São Paulo, Bauru campus, Brazil.
b Speech and Hearing Therapist; MSc student in Sciences of the Processes and Communication Disorders - School of Dentistry of Bauru, University of São Paulo, Bauru campus, Brazil.
c Speech and Hearing Therapist, Specialist in Family and Community Health - Federal University of São Carlos, São Carlos, São Paulo, Brazil.
d PhD; Professor; Speech and Hearing Therapist - Cochlear Implant and Cleft Lip and Palate Program - Santo Antônio Hospital, Irmã Dulce Social Works, Salvador, Bahia, Brazil.
e PhD, Associate Professor - Department of Speech and Hearing Therapy, School of Dentistry, University of São Paulo, Bauru campus, Brazil.
Under a Creative Commons license
Abstract

Studies of cortical auditory evoked potentials using speech stimuli in normal-hearing individuals are important for understanding how stimulus complexity influences the characteristics of the cortical potential generated.

Objective

To characterize the cortical auditory evoked potentials and the P3 auditory cognitive potential elicited by vocalic and consonantal contrast stimuli in normal-hearing individuals.

Method

Thirty-one individuals aged between 7 and 30 years, with no risk factors for hearing, neurological, or language disorders, participated in this study. The cortical auditory evoked potentials and the P3 auditory cognitive potential were recorded at the Fz and Cz active channels using consonantal (/ba/-/da/) and vocalic (/i/-/a/) speech contrasts. Design: a cross-sectional prospective study.

Results

We found a statistically significant difference between the speech contrast used and the latencies of the N2 (p < 0.01) and P3 (p < 0.01) components, as well as between the active channel considered (Fz/Cz) and the P3 latency and amplitude values. These associations did not occur for the exogenous components N1 and P2.

Conclusion

The speech stimulus contrast, vocalic or consonantal, must be taken into account in the analysis of the cortical auditory evoked potential, N2 component, and auditory cognitive P3 potential.

Keywords:
audiology; auditory pathways; electrophysiology; event-related potentials, P300; evoked potentials, auditory
Full Text
INTRODUCTION

The study of the P3 auditory cognitive evoked potential enables the assessment of neurophysiological cognitive processes that take place in the cerebral cortex, such as memory and auditory attention1. Since this is an objective method, its clinical applicability has been demonstrated in different neurological and mental conditions, as well as in hearing, language, and learning disorders, among others2-6.

Two auditory stimuli are utilized in the oddball paradigm, one rare and one frequent; they contrast with each other and are built based on frequency, intensity, meaning, or category. Using two recording channels, it is possible to observe the N1, P2 and N2 cortical potentials for the frequent stimulus, and the P3 component for the rare stimulus. The number used to name these components refers to the order in which the potentials are recorded, and the letters characterize positive (P) and negative (N) peaks. It is important to stress that the P3 is considered a cognitive potential, different from the others, since it corresponds to the electrical activity that takes place in the auditory system when the rare stimulus is discriminated among the frequent ones.
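The 80%/20% oddball sequence described above can be sketched in a few lines. This is a hedged illustration only: the function name and parameters are hypothetical, and real evoked-potential systems (such as the one used in this study) randomize stimulus order internally.

```python
import random

def oddball_sequence(n_trials, frequent, rare, p_rare=0.20, seed=42):
    """Build a pseudo-random oddball sequence: ~80% frequent, ~20% rare.

    Hypothetical helper for illustration; not the software used in the study.
    """
    rng = random.Random(seed)
    n_rare = round(n_trials * p_rare)          # exact 20% rare stimuli
    seq = [rare] * n_rare + [frequent] * (n_trials - n_rare)
    rng.shuffle(seq)                           # randomize presentation order
    return seq

seq = oddball_sequence(300, frequent="/ba/", rare="/da/")
print(seq.count("/da/") / len(seq))  # → 0.2
```

Fixing the seed makes the sketch reproducible; in a real acquisition the order would differ per run while keeping the same rare/frequent proportion.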

Studies have characterized the latency and amplitude of the P3 component evoked by pure tones in normal-hearing individuals. However, acoustic signal processing happens in very different ways for verbal and non-verbal sounds7-10, and it is difficult to generalize auditory processing information from a simple stimulus to a more complex one, like speech11.

The P3 cognitive auditory evoked potential generated by speech has been utilized to provide speech signal processing information when the behavioral assessment is not an accurate method, besides helping to pinpoint detection or discrimination alterations, and such information may guide the therapeutic rehabilitation of the individual12.

Thus, studies involving auditory evoked potentials with speech stimuli are important in order to understand how stimulus complexity influences the characteristics of the potential generated, such as latency and amplitude. Table 1 depicts the latency values of the cortical auditory evoked potentials and the P3 cognitive potential, as well as the amplitude values, evoked by speech (syllable) stimuli in adults with normal hearing.

Table 1.

Mean values of the N1, P2, N2 e P3 component latencies (milliseconds) and amplitude values (μV) from the P3 component in adults.

Study  N1  P2  N2  P3  P3 amp. 
Sharma et al.13  117.0 (± 4)  -  -  -  - 
Tampas et al.14  -  -  -  398.9  0.025 
Gilley et al.15  108.0 (± 16)  176.0 (± 14)  -  -  - 
Garinis & Cone-Wesson16  40 dBSL: 110 ms  40 dBSL: 200 ms  -  40 dBSL /sa/: 355; /da/: 345  5.67 (± 4.71) 
Massa et al.17  -  -  -  348.95 (± 29.69)  6.61 (2.76) 
Bennett et al.18  -  -  -  363 (± 7.7)  4.7 (± 0.6) 

amp.: amplitude.

The goal of the present paper was to characterize the cortical auditory evoked potentials and the P3 cognitive auditory potential elicited by speech stimuli with vocalic and consonantal contrasts in normal-hearing individuals.

METHOD

This is a cross-sectional and prospective study carried out with the approval of the Ethics Committee, process # 069/2003. All the individuals assessed, or their guardians, signed the Informed Consent Form prior to being submitted to the exam.

We assessed 31 normal-hearing individuals (13 females and 18 males) aged between 7 and 30 years, with no history of risk factors for auditory, neurological, or language disorders.

Normal hearing was confirmed by auditory thresholds ≤ 25 dBHL on pure-tone threshold audiometry, scores of 92% for monosyllabic words in the speech recognition index (SRI), a type A tympanometry curve, and acoustic reflexes between 70 and 90 dBSL. We used the Madsen 622 audiometer®, with TDH-39 headphones, calibrated to the ANSI-69 standard, and the Interacoustics AZ7® immittance audiometer.

During the test, the individuals remained lying supine on a gurney and were instructed to keep their eyes as still as possible in order to reduce the artifact caused by eye movement. Upon identifying the rare stimulus among the frequent ones, the individuals were instructed to perform a simple motor action (raising the hand).

The simultaneous recording of the N1/P2 and N2/P3 complexes in channels Fz and Cz was the criterion used to define the presence of the cortical auditory evoked potentials and the P3 cognitive auditory potential. We used the Biologic's Evoked Potential System® (EP) with the parameters described in Table 2.

Table 2.

Parameters utilized in the study of cortical evoked potentials and the P3 cognitive auditory potential.

Assessment parameters
Type of stimulus  Speech stimulus (80% frequent and 20% rare) 
Speech contrasts  Vowel contrast: /i/ (frequent); /a/ (rare). Consonant contrast: /ba/ (frequent); /da/ (rare) 
Stimulus presentation rate  1 stimulus per second 
Electrode positioning  Fz and Cz (active); A1/A2 (reference) 
Pre-amplifier  Channels 1 and 2: input 1 - active electrodes; input 2 - reference electrodes (jumper) 
Impedance  ≤ 5 kΩ (individual); ≤ 2 kΩ (between electrodes) 
Band-pass filter  1-25 Hz 
Window  520 ms 
Gain  75,000 
Intensity  70 dBHL, binaural stimulation 
Transducer  Insert earphone (3A) 

The speech sample was collected in an acoustically treated room inside a laboratory. The emissions were recorded with a unidirectional microphone directly onto the computer board, through the free software Praat® (www.praat.org), with 22 kHz sampling. We asked the speaker (a 22-year-old male with a fluid voice quality) to utter the emissions naturally. First, we worked on the contrast of the /ba/-/da/ articulation point. Based on their spectral and temporal definition, /ba/ was set up as the frequent stimulus and /da/ as the rare one. The [ba] and [da] syllables were taken from utterances of the words [ba'ba] and [da'da], respectively, corresponding to the second syllable. From each isolated syllable, we found the F1, F2 and F3 values in their initial and stable portions. With the bandwidth values of the stable regions of the formant frequencies, we compiled a Praat script (version 4.2.31) and resynthesized each syllable. The duration of the [ba] and [da] syllables was 180 ms.

The /i/-/a/ vowel contrast was established by the frequencies of formants F1 and F2 and by a shorter F3 extension. Vowels [a] and [i] were taken from the isolated utterances of the syllables [pa] and [pi], respectively. In the vowel region of each syllable, we collected two glottal cycles with spectral stability and, in Matlab® (version 6.0.0.88), replicated these cycles so as to reach the 150 ms vowel duration. The vowels were created in Praat® with a script similar to the one previously described for the syllables. The linguistic stimuli thus produced, edited, and recorded on a CD by the laboratory were digitized and inserted into drive C of the computer connected to the software of the Biologic's Evoked Potential System® (EP). The stimulus order and level of presentation were randomized by the aforementioned software.
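The cycle-replication step above (tiling stable glottal cycles until the vowel reaches 150 ms at 22 kHz sampling) can be sketched as follows. This is a minimal illustration under stated assumptions: the synthetic sine cycle, the assumed 110 Hz fundamental frequency, and the function name are hypothetical stand-ins for the real recorded cycles and Matlab® script.

```python
import math

FS = 22_000          # sampling rate (Hz), as in the study's recording setup
TARGET_MS = 150      # target vowel duration (ms), as described above
F0 = 110             # assumed fundamental frequency (not given in the paper)

def replicate_cycles(cycle, target_samples):
    """Tile a glottal cycle until the target length is reached, then trim."""
    reps = -(-target_samples // len(cycle))   # ceiling division
    return (cycle * reps)[:target_samples]

# one synthetic glottal cycle at F0 (stands in for a recorded stable cycle)
samples_per_cycle = round(FS / F0)
cycle = [math.sin(2 * math.pi * i / samples_per_cycle)
         for i in range(samples_per_cycle)]

vowel = replicate_cycles(cycle, round(FS * TARGET_MS / 1000))
print(len(vowel) / FS * 1000)  # duration in ms → 150.0
```

Trimming after tiling guarantees the exact target duration even when the cycle length does not divide it evenly; in practice one would also taper the final partial cycle to avoid a discontinuity.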

In order to assess the results, we considered the absolute latencies of the N1, P2 and N2 components of the cortical auditory evoked potentials and of the P3 cognitive auditory potential, as well as the P3 component amplitude, obtained from channels Fz and Cz.

We compared the means across channel and stimulus types for the variables of interest (amplitude and latency) using a two-factor repeated-measures analysis of variance (ANOVA).

RESULTS

Figure 1 depicts an example of the recording obtained from studying the cortical auditory evoked potential and the P3 cognitive auditory potential in the Fz and Cz channels.

Figure 1.

Record obtained in the study of the cortical auditory evoked potential and the P3 auditory cognitive potential from a 29-year-old female.


Upon investigating the occurrence of the records of the N1, P2, N2 and P3 components, with the sample broken down into the age ranges 7-10 years, 11-20 years, and 21-30 years, we can see the influence of age on the recordings of components N1 and P2 (Table 3).

Table 3.

Record occurrence (%) of components N1, P2, N2 and P3 considering the 7-10 years; 11-20 years and 21-30 years age ranges.

Age range (years)  N1  P2  N2  P3 
7-10 (n = 9)  22.22%  66.66%  100%  77.77% 
11-20 (n = 10)  90%  80%  100%  100% 
21-30 (n = 12)  100%  100%  83.33%  100% 

Table 4 depicts the descriptive analysis (mean, standard deviation, maximum and minimum values) of the N1, P2, N2 and P3 component latencies and P3 component amplitude, recorded from channels Fz and Cz, for all the individuals.

Table 4.

Descriptive analysis (mean, standard deviation, maximum and minimum values) of the N1, P2, N2 and P3 component latency values in milliseconds and P3 component amplitude (μV) recorded in the Fz and Cz channels.

             Fz                              Cz
       X    SD   Minimum  Maximum     X    SD   Minimum  Maximum
N1  C  104  40   66       197         105  42   45       197
    V  106  17   75       139         103  33   50       170
P2  C  191  49   126      255         189  48   124      262
    V  186  35   117      240         179  36   99       230
N2  C  274  40   195      361         278  41   205      379
    V  236  38   153      289         239  27   182      278
P3  C  388  60   243      493         403  54   307      493
    V  322  39   226      376         339  44   249      447
P3 amp.  C  15  18
         V  10  23  14

X: Mean; SD: Standard deviation; amp.: Amplitude; C: Consonant; V: Vowel.

Our analysis of the association of the latencies of components N1, P2, N2 and P3, and of the P3 component amplitude, with the type of channel and the stimulus utilized did not show differences for the latency values of components N1 and P2. There was, however, a difference between the active channels (Fz and Cz) in the recording of the P3 component (Table 5).

Table 5.

Study of the association between the channel type and stimulus factors and the N1, P2, N2 e P3 component latency variables and the P3 component amplitude.

Variation source  N1 (F; p)  P2 (F; p)  N2 (F; p)  P3 (F; p)  P3 amp. (F; p)
Stimulus  0.11; 0.74  1.10; 0.30  16.26; < 0.01*  82.58; < 0.01*  0.01; 0.90 
Channel  0.04; 0.82  0.99; 0.33  0.47; 0.49  10.95; < 0.01*  6.87; 0.01* 
Stimulus × channel  0.23; 0.63  1.00; 0.32  0.13; 0.72  0.09; 0.75  1.67; 0.20 
* Significant values (p ≤ 0.05) - ANOVA. amp.: Amplitude.

Table 6 depicts the Tukey Post-Hoc comparisons, considering the type of stimulus (consonant-vowel) for the latency of components N2 and P3 and considering the type of channel (Fz-Cz) for the amplitude and latency of the P3 component.

Table 6.

N2 and P3 component latency values considering the type of stimulus (consonant-vowel), and P3 component amplitude and latency values considering the channel (Fz-Cz).

Comparison  Factor  Mean difference  Standard error  t  p  95% CI (lower; upper)
Amplitude P3  Channel (Fz-Cz)  2.20  0.84  2.62  0.01*  0.47; 3.94 
Latency P3  Channel (Fz-Cz)  -19.52  5.89  -3.31  0.01*  -31.63; -73.68 
Latency N2  Stimulus (consonant-vowel)  36.36  9.01  4.03  < 0.01*  17.61; 55.11 
Latency P3  Stimulus (consonant-vowel)  66.86  7.35  9.08  < 0.01*  51.71; 82.01 
* Significant values (p ≤ 0.05) - Tukey post-hoc comparisons.
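The quantities reported in Table 6 (mean difference and standard error of a subject-wise contrast) can be illustrated with a simple paired comparison. This is a hedged sketch only: the latency values below are made-up examples, not the study's raw data, and a plain paired contrast is shown rather than the Tukey procedure actually used.

```python
import math

def paired_mean_diff(a, b):
    """Mean difference, standard error, and t statistic for paired samples."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)   # sample variance
    se = math.sqrt(var / n)                               # SE of the mean diff
    return mean, se, mean / se

# hypothetical N2 latencies (ms) for the consonant and vowel contrasts
consonant = [270, 280, 265, 290, 275]
vowel     = [235, 240, 230, 250, 238]

mean, se, t = paired_mean_diff(consonant, vowel)
print(mean, se)  # → 37.4 and ~1.12: consonant latencies are higher, as in Table 6
```

A positive mean difference here corresponds to the study's finding that consonant-contrast latencies exceed vowel-contrast latencies; the Tukey post-hoc adjustment additionally controls for multiple comparisons after the ANOVA.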

DISCUSSION

In the present investigation, it was possible to record the cortical auditory evoked potentials and the P3 cognitive auditory potential from a speech stimulus with good reproducibility and morphology, showing that this is a viable procedure for clinical practice (Figure 1).

Analyzing the occurrence of the recordings of the N1 and P2 exogenous components, we noticed that their presence increased with age. The N1 component was practically absent in the 7-10 years age range, corroborating the literature, which states that, depending on the stimulus presentation characteristics, its recording can only be obtained from approximately 16 years of age19. Considering that the P2 component can also be influenced by age20, these data reflect the maturation process of the structures involved in the generation of the cortical auditory evoked potential.

Nonetheless, age did not influence the occurrence of recordings of the N2 and P3 components, which are found more frequently than the N1 and P2 components in children21. The gender variable was not analyzed because a previous study by our group showed no significant differences between males and females in the P3 auditory cognitive potential22.

In investigating the cortical auditory evoked potentials, we noticed that the N1 and P2 exogenous component latencies did not show significant differences with respect to the channel (Fz/Cz) or the type of stimulus utilized (/a/-/i/; /ba/-/da/). Nevertheless, for the P3 cognitive auditory potential, the channel was a factor that influenced its latency and amplitude, as previously reported in other studies22,23. By the same token, the type of stimulus was an important variable in the attainment of the N2 and P3 components.

The N2 component recording seems to be associated with the identification of, processing of, and attention to the rare stimulus, with a positive correlation between its latency value and the level of difficulty of the discrimination task24. In our study, the speech stimulus influenced the N2 component, with higher latency values for the consonant contrast, suggesting that the degree of difficulty in discriminating this contrast is higher than that found for the vowel contrast. A similar finding was observed for the P3 component when comparing verbal and non-verbal stimuli and in situations of difficult discrimination14,17,18,25, reinforcing the hypothesis that this task is more difficult26.

However, this finding can also be explained by evidence that vowels and consonants are processed differently by the central auditory system. A study carried out in rats27 compared behavioral discrimination responses to vowels and consonants with neural recordings from the inferior colliculus and primary auditory cortex, and suggested that consonants and vowels have different representations in the brain. In humans, studies have also reported differences in the activation of central auditory system structures during the discrimination of vowels and consonants28,29. Therefore, the type of speech contrast used may affect the latencies of the N2 and P3 components differently.

Some studies describe the reduction in the P3 component amplitude with the increase in the task's level of discrimination difficulty14,17,18,25,26. Nonetheless, this correlation was not significant in the present study.

In our series, the normal latency values of the N1, P2, N2 and P3 components for the vowel and consonant contrasts are depicted in Table 4. A comparative discussion between the values found here and the results of previous studies would be inaccurate, because the methodologies differ and, as shown above, assessment parameters such as the type of stimulus utilized have a significant influence on the latency values of auditory evoked potentials.

Considering that different neural structures are activated during the perception of verbal and non-verbal sounds, we stress the importance of using speech stimuli in future studies with the cortical auditory evoked potentials and the P3 cognitive auditory potential.

CONCLUSION

The speech stimulus contrast, consonantal or vocalic, must be considered in the analysis of the N2 component of the cortical auditory evoked potentials and of the P3 cognitive auditory potential. This was not observed for the N1 and P2 components.

ACKNOWLEDGEMENTS

We would like to thank the Institutional Program of Scientific Initiation Scholarships (PIBIC) from the National Council for Scientific and Technological Development (CNPq) for their support in this study, under process # 110767/2005-5.

REFERENCES
[1]
Sousa LCA , Piza MRT , Alvarenga KF , Cóser PL .
Potenciais Auditivos Evocados Corticais Relacionados a Eventos (P300).
Eletrofisiologia da audição e emissões otoacústicas, 2nd ed., pp. 95-107
[2]
Ethridge LE , Hamm JP , Shapiro JR , Summerfelt AT , Keedy SK , Stevens MC , et al.
Neural activations during auditory oddball processing discriminating schizophrenia and psychotic bipolar disorder.
Biol Psychiatry., 72 (2012), pp. 766-774
[3]
Gao Y , Raine A , Schug RA .
P3 event-related potentials and childhood maltreatment in successful and unsuccessful psychopaths.
Brain Cogn., 77 (2011), pp. 176-182
[4]
Reis AC , Iório MC .
P300 in subjects with hearing loss.
Pró-Fono., 19 (2007), pp. 113-122
[5]
Weber-Fox C , Leonard LB , Wray AH , Tomblin JB .
Electrophysiological correlates of rapid auditory and linguistic processing in adolescents with specific language impairment.
Brain Lang., 115 (2010), pp. 162-181
[6]
Wiemes GR , Kozlowski L , Mocellin M , Hamerschmidt R , Schuch LH .
Cognitive evoked potentials and central auditory processing in children with reading and writing disorders.
Braz J Otorhinolaryngol., 78 (2012), pp. 91-97
[7]
Uppenkamp S , Johnsrude IS , Norris D , Marslen-Wilson W , Patterson RD .
Locating the initial stages of speech-sound processing in human temporal cortex.
Neuroimage., 31 (2006), pp. 1284-1296
[8]
Liebenthal E , Binder JR , Spitzer SM , Possing ET , Medler DA .
Neural substrates of phonemic perception.
Cereb Cortex., 15 (2005), pp. 1621-1631
[9]
Husain FT , Fromm SJ , Pursley RH , Hosey LA , Braun AR , Horwitz B .
Neural bases of categorization of simple speech and nonspeech sounds.
Hum Brain Mapp., 27 (2006), pp. 636-651
[10]
Samson F , Zeffiro TA , Toussaint A , Belin P .
Stimulus complexity and categorical effects in human auditory cortex: an activation likelihood estimation meta-analysis.
Front Psychol., 1 (2010), pp. 241
[11]
Henkin Y , Kileny PR , Hildesheimer M , Kishon-Rabin L .
Phonetic processing in children with cochlear implants: an auditory event-related potentials study.
Ear Hear., 29 (2008), pp. 239-249
[12]
Martin BA , Tremblay KL , Korczak P .
Speech evoked potentials: from the laboratory to the clinic.
Ear Hear., 29 (2008), pp. 285-313
[13]
Sharma A , Kraus N , McGee TJ , Nicol TG .
Developmental changes in P1 and N1 central auditory responses elicited by consonant-vowel syllables.
Electroencephalogr Clin Neurophysiol., 104 (1997), pp. 540-545
[14]
Tampas JW , Harkrider AW , Hedrick MS .
Neurophysiological indices of speech and nonspeech stimulus processing.
J Speech Lang Hear Res., 48 (2005), pp. 1147-1164
[15]
Gilley PM , Sharma A , Dorman M , Martin K .
Developmental changes in refractoriness of the cortical auditory evoked potential.
Clin Neurophysiol., 116 (2005), pp. 648-657
[16]
Garinis AC , Cone-Wesson BK .
Effects of stimulus level on cortical auditory event-related potentials evoked by speech.
J Am Acad Audiol., 18 (2007), pp. 107-116
[17]
Massa CG , Rabelo CM , Matas CG , Schochat E , Samelli AG .
P300 with verbal and nonverbal stimuli in normal hearing adults.
Braz J Otorhinolaryngol., 77 (2011), pp. 686-690
[18]
Bennett KO , Billings CJ , Molis MR , Leek MR .
Neural encoding and perception of speech signals in informational masking.
Ear Hear., 33 (2012), pp. 231-238
[19]
Sussman E , Steinschneider M , Gumenyuk V , Grushko J , Lawson K .
The maturation of human evoked brain potentials to sounds presented at different stimulus rates.
Hear Res., 236 (2008), pp. 61-79
[20]
Wunderlich JL , Cone-Wesson BK , Shepherd R .
Maturation of the cortical auditory evoked potential in infants and young children.
Hear Res., 212 (2006), pp. 185-202
[21]
Martin L , Barajas JJ , Fernandez R , Torres E .
Auditory event-related potentials in well-characterized groups of children.
Electroencephalogr Clin Neurophysiol., 71 (1988), pp. 375-381
[22]
Duarte JL , Alvarenga Kde F , Banhara MR , Melo AD , Sás RM , Costa Filho OA .
P300-long-latency auditory evoked potential in normal hearing subjects: simultaneous recording value in Fz and Cz.
Braz J Otorhinolaryngol., 75 (2009), pp. 231-236
[23]
Franco GM .
The cognitive potential in normal adults.
Arq Neuropsiquiatr., 59 (2001), pp. 198-200
[24]
Novak GP , Ritter W , Vaughan HG Jr , Wiznitzer ML .
Differentiation of negative event-related potentials in an auditory discrimination task.
Electroencephalogr Clin Neurophysiol., 75 (1990), pp. 255-275
[25]
Geal-Dor M , Kamenir Y , Babkoff H .
Event related potentials (ERPs) and behavioral responses: comparison of tonal stimuli to speech stimuli in phonological and semantic tasks.
J Basic Clin Physiol Pharmacol., 16 (2005), pp. 139-155
[26]
Beynon AJ , Snik AF , Stegeman DF , van den Broek P .
Discrimination of speech sound contrasts determined with behavioral tests and event-related potentials in cochlear implant recipients.
J Am Acad Audiol., 16 (2005), pp. 42-53
[27]
Perez CA , Engineer CT , Jakkamsetti V , Carraway RS , Perry MS , Kilgard MP .
Different timescales for the neural coding of consonant and vowel sounds.
Cereb Cortex., 23 (2013), pp. 670-683
[28]
Jäncke L , Wüstenberg T , Scheich H , Heinze HJ .
Phonetic perception and the temporal cortex.
Neuroimage., 15 (2002), pp. 733-746
[29]
Joanisse MF , Gati JS .
Overlapping neural regions for processing rapid temporal cues in speech and nonspeech signals.
Neuroimage., 19 (2003), pp. 64-79


Paper submitted to the BJORL-SGP (Publishing Management System - Brazilian Journal of Otorhinolaryngology) on October 5, 2012; and accepted on January 19, 2013. cod. 10506.

Copyright © 2013. Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial