EEG Database for musical genre detection
=============================================================
Version 1.0
Copyright © 2025, ATIC Research Group, Universidad de Malaga
Authors: Isaac Ariza (iariza@ic.uma.es), Ana M. Barbancho (abp@uma.es), Lorenzo J. Tardón (ltg@uma.es), Isabel Barbancho (ibp@uma.es)
Date: 12/01/2025

This database is made up of EEG signals from 6 subjects listening to fragments of songs from different musical genres, together with their answers to two questions: did you know this song? and do you like this song? The musical genres chosen are: ballad, classical, metal and reggaeton.

The signals were captured with the BrainVision actiCHAMP-PLUS system and consist of a total of 64 EEG channels. The BrainVision Recorder software was used to store the signals. The stimulus presentation software used to design the experiment is Eprime 3.

For more detailed information on this database, the capture system used and its applications, see [1]. If these data are used for any publication, the following paper must be cited:

[1] Isaac Ariza, Ana M. Barbancho, Lorenzo J. Tardón, Isabel Barbancho, Energy-based features and bi-LSTM neural network for EEG-based music and voice classification. Neural Comput & Applic 36, 791–802 (2024). https://doi.org/10.1007/s00521-023-09061-3

Folder included
-----------------------------------
./Raw_Data

This folder contains the original raw data of each subject performing the experiment. For each subject there are four different files:

- S_00XX_E_0007_0001.eeg: contains the EEG signals.
- S_00XX_E_0007_0001.vhdr: the header file, with information about the EEG signals such as the sampling rate and the impedance level of the electrodes.
- S_00XX_E_0007_0001.vmrk: contains information about the events (triggers) occurring in the data.
- S_00XX_E_0007_0001.txt: generated by the stimulus presentation program running the experiment (Eprime 3); it collects the responses of the subjects.

In these file names, XX indicates the number of the subject carrying out the experiment. Table 1 shows the age and gender of each subject.

Table 1. Subjects participating in the experiments:

Subject #   Age   Gender
0009        26    Male
0010        20    Female
0013        20    Female
0014        21    Female
0015        22    Male
0017        25    Female

The *.txt files generated by Eprime contain the identifier of the song fragment being listened to at each moment (StimVal) and the subject's answers to two questions: do you know this song? (SlideConocer.RESP) and do you like this song? (SlideGustar.RESP). In the EEG data, the onset time of the presentation of each stimulus is marked with its StimVal.

The SlideConocer.RESP field can have three values:
· 1: knows this song.
· 2: knows this song a bit.
· 3: does not know this song.

The SlideGustar.RESP field can have three values:
· 1: likes it.
· 2: likes it a bit.
· 3: does not like it.

Table 2 shows the StimVal value that corresponds to each song:

StimVal   Genre       Song                        Singer
1         Classical   classical.00011             [2]
2         Classical   classical.00089             [2]
3         Classical   classical.00094             [2]
4         Classical   classical.00096             [2]
5         Metal       metal.00007                 [2]
6         Metal       metal.00046                 [2]
7         Metal       metal.00070                 [2]
8         Metal       metal.00092                 [2]
9         Reggaeton   Despacito                   Luis Fonsi
10        Reggaeton   Gasolina                    Daddy Yankee
11        Reggaeton   La gozadera                 Gente de Zona
12        Reggaeton   Danza Kuduro                Don Omar
13        Ballad      Una Estrella en mi jardín   Mari Trini
14        Ballad      Libre                       Nino Bravo
15        Ballad      I will always love you      Whitney Houston
16        Ballad      Yesterday                   The Beatles
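Example usage
-----------------------------------
The raw data can be read with any tool that supports the BrainVision format. As an illustration only (MNE-Python is not part of this database and is merely one possible reader), the following sketch loads one subject and extracts the stimulus markers. The file path, the subject number and the exact marker labels produced by BrainVision Recorder are assumptions that may need adjusting to your copy of the data.

    import mne

    # Load one subject's recording. The BrainVision reader takes the .vhdr header
    # file and picks up the matching .eeg and .vmrk files automatically.
    # Subject 0009 is only an example.
    raw = mne.io.read_raw_brainvision("Raw_Data/S_0009_E_0007_0001.vhdr", preload=True)
    print(raw.info["sfreq"], len(raw.ch_names))   # sampling rate and number of channels (64)

    # The triggers stored in the .vmrk file are exposed as annotations; convert them
    # to an events array. event_id maps each marker description to an integer code;
    # inspect it to see how the StimVal values (1-16) are labelled in the recording.
    events, event_id = mne.events_from_annotations(raw)
    print(event_id)

    # StimVal-to-genre mapping taken from Table 2.
    stimval_to_genre = {}
    stimval_to_genre.update({v: "classical" for v in range(1, 5)})
    stimval_to_genre.update({v: "metal" for v in range(5, 9)})
    stimval_to_genre.update({v: "reggaeton" for v in range(9, 13)})
    stimval_to_genre.update({v: "ballad" for v in range(13, 17)})

The *.txt file of each subject is plain text written by Eprime 3 and can be parsed separately to recover, for every trial, the StimVal value together with the SlideConocer.RESP and SlideGustar.RESP answers described above.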
References
-----------------------------------
[1] Isaac Ariza, Ana M. Barbancho, Lorenzo J. Tardón, Isabel Barbancho, Energy-based features and bi-LSTM neural network for EEG-based music and voice classification. Neural Comput & Applic 36, 791–802 (2024). https://doi.org/10.1007/s00521-023-09061-3
[2] Tzanetakis G, Essl G, Cook P (2001) Automatic musical genre classification of audio signals. In: Proceedings of the 2nd International Symposium on Music Information Retrieval, Indiana, vol 144. http://ismir2001.ismir.net/pdf/tzanetakis.pdf