Measuring the Quality of Machine Learning and Optimization Frameworks
dc.contributor.author | Villalobos Salas, Ignacio Javier | |
dc.contributor.author | Ferrer-Urbano, Francisco Javier | |
dc.contributor.author | Alba-Torres, Enrique | |
dc.date.accessioned | 2018-11-26T10:39:14Z | |
dc.date.available | 2018-11-26T10:39:14Z | |
dc.date.created | 2018 | |
dc.date.issued | 2018-11-26 | |
dc.identifier.uri | https://hdl.handle.net/10630/16951 | |
dc.description.abstract | Software frameworks are used daily and extensively in research, both for fundamental studies and applications. Researchers usually trust the quality of these frameworks without any evidence that they are correctly built; indeed, they could contain defects that potentially affect thousands of already published and future papers. Considering the important role of these frameworks in the current state of the art in research, their quality should be quantified to reveal the weaknesses and strengths of each software package. In this paper we study the main static quality properties, defined in the product quality model proposed by the ISO 25010 standard, of ten well-known frameworks. We provide a quality rating for each characteristic depending on the severity of the issues detected in the analysis. In addition, we propose an overall quality rating of 12 levels (ranging from A+ to D-) that aggregates the ratings of all characteristics. As a result, we have data evidence to claim that the analysed frameworks are not in good shape, because the best overall rating is just a C+, obtained by the Mahout framework, i.e., all packages need a revision of the analysed features. Focusing on the characteristics individually, maintainability is by far the one that requires the greatest effort to fix the defects found. On the other hand, performance obtains the best average rating, a result that matches our expectations because frameworks’ authors tend to take care of how fast their software runs. | en_US |
dc.description.sponsorship | University of Malaga. Campus de Excelencia Internacional Andalucía Tech. We would like to thank all authors of these frameworks, who make research easier for all of us. This research has been partially funded by CELTIC C2017/2-2 in collaboration with the companies EMERGYA and SECMOTIC under contracts #8.06/5.47.4997 and #8.06/5.47.4996. It has also been funded by the Spanish Ministry of Science and Innovation and Junta de Andalucía/FEDER under contracts TIN2014-57341-R and TIN2017-88213-R, and the network of smart cities CI-RTI (TIN2016-81766-REDT). | en_US |
dc.language.iso | eng | en_US |
dc.rights | info:eu-repo/semantics/openAccess | en_US |
dc.subject | Artificial intelligence | en_US |
dc.subject.other | Maintainability | en_US |
dc.subject.other | Reliability | en_US |
dc.subject.other | Performance | en_US |
dc.subject.other | Quality | en_US |
dc.subject.other | Security | en_US |
dc.title | Measuring the Quality of Machine Learning and Optimization Frameworks | en_US |
dc.type | info:eu-repo/semantics/conferenceObject | en_US |
dc.centro | E.T.S.I. Informática | en_US |
dc.relation.eventtitle | Conference of the Spanish Association for Artificial Intelligence (CAEPIA) | en_US |
dc.relation.eventplace | Granada, Spain | en_US |
dc.relation.eventdate | 23-26 October 2018 | en_US |