A generic LSTM neural network architecture to infer heterogeneous model transformations.

dc.contributor.author: Burgueño-Caballero, Lola
dc.contributor.author: Cabot, Jordi
dc.contributor.author: Li, Shuai
dc.contributor.author: Gérard, Sébastien
dc.date.accessioned: 2023-09-22T06:36:55Z
dc.date.available: 2023-09-22T06:36:55Z
dc.date.issued: 2023
dc.departamento: Instituto de Tecnología e Ingeniería del Software de la Universidad de Málaga
dc.description.abstract: Models capture relevant properties of systems. During the models’ life-cycle, they are subjected to manipulations with different goals such as managing software evolution, performing analysis, increasing developers’ productivity, and reducing human errors. Typically, these manipulation operations are implemented as model transformations. Examples of these transformations are (i) model-to-model transformations for model evolution, model refactoring, model merging, model migration, model refinement, etc., (ii) model-to-text transformations for code generation, and (iii) text-to-model ones for reverse engineering. These operations are usually manually implemented, using general-purpose languages such as Java, or domain-specific languages (DSLs) such as ATL or Acceleo. Even when using such DSLs, transformations are still time-consuming and error-prone. We propose using the advances in artificial intelligence techniques to learn these manipulation operations on models and automate the process, freeing the developer from building specific pieces of code. In particular, our proposal is a generic neural network architecture suitable for heterogeneous model transformations. Our architecture comprises an encoder–decoder long short-term memory with an attention mechanism. It is fed with pairs of input–output examples and, once trained, given an input, automatically produces the expected output. We present the architecture and illustrate the feasibility and potential of our approach through its application in two main operations on models: model-to-model transformations and code generation. The results confirm that neural networks are able to faithfully learn how to perform these tasks as long as enough data are provided and no contradictory examples are given.
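The abstract describes an encoder–decoder LSTM with attention that is fed input–output example pairs and, once trained, produces the expected output for a given input. The following is a minimal, untrained NumPy sketch of that kind of architecture, not the authors' implementation: all dimensions, weight shapes, and the single-layer dot-product-attention design are illustrative assumptions, and a real system would use a deep-learning framework with trained weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def lstm_step(x, h, c, W):
    """One LSTM cell step; W maps [x; h] to the four gate pre-activations."""
    H = h.size
    z = W @ np.concatenate([x, h])
    i, f, o = sigmoid(z[:H]), sigmoid(z[H:2 * H]), sigmoid(z[2 * H:3 * H])
    g = np.tanh(z[3 * H:])
    c = f * c + i * g          # cell state: forget old, write new
    h = o * np.tanh(c)         # hidden state exposed to the next step
    return h, c

D, H, V = 8, 16, 20            # token-embedding size, hidden size, output vocabulary (illustrative)
W_enc = rng.normal(0, 0.1, (4 * H, D + H))
W_dec = rng.normal(0, 0.1, (4 * H, D + 2 * H))  # decoder input also carries the context vector
W_out = rng.normal(0, 0.1, (V, 2 * H))          # projects [h; context] to the output vocabulary
E_out = rng.normal(0, 0.1, (V, D))              # embeddings of output tokens, fed back in

def transform(src_tokens, n_out):
    """Encode the input sequence, then decode n_out steps with dot-product attention."""
    h, c = np.zeros(H), np.zeros(H)
    enc_states = []
    for x in src_tokens:                        # encoder pass over the input example
        h, c = lstm_step(x, h, c, W_enc)
        enc_states.append(h)
    enc_states = np.array(enc_states)

    y_prev = np.zeros(D)                        # start-of-sequence embedding
    outputs = []
    for _ in range(n_out):                      # decoder pass producing the output
        weights = softmax(enc_states @ h)       # attention over encoder states
        context = weights @ enc_states
        h, c = lstm_step(np.concatenate([y_prev, context]), h, c, W_dec)
        probs = softmax(W_out @ np.concatenate([h, context]))
        outputs.append(probs)
        y_prev = E_out[probs.argmax()]          # feed the predicted token back in
    return np.array(outputs)

src = rng.normal(size=(5, D))   # a dummy 5-token serialized input model
out = transform(src, n_out=3)   # one probability distribution per output token
```

Training such a network (not shown) would minimize cross-entropy between these output distributions and the serialized target models of the input–output example pairs.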
dc.description.sponsorship: Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech.
dc.identifier.uri: https://hdl.handle.net/10630/27636
dc.language.iso: eng
dc.relation.eventdate: 12/09/2023 - 14/09/2023
dc.relation.eventplace: Ciudad Real, Spain
dc.relation.eventtitle: XXVII Jornadas de Ingeniería del Software y Bases de Datos (JISBD 2023)
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.accessRights: open access
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: Computer architecture
dc.subject: Neural networks (Computer science)
dc.subject.other: Model manipulation
dc.subject.other: Code generation
dc.subject.other: Model transformation
dc.subject.other: Artificial intelligence
dc.subject.other: Machine learning
dc.subject.other: Neural networks
dc.title: A generic LSTM neural network architecture to infer heterogeneous model transformations.
dc.type: conference output
dspace.entity.type: Publication
relation.isAuthorOfPublication: 31808e70-d2ec-4318-8ead-dded38954d40
relation.isAuthorOfPublication.latestForDiscovery: 31808e70-d2ec-4318-8ead-dded38954d40

Files

Original bundle

Name: JISBD2023.pdf
Size: 156 KB
Format: Adobe Portable Document Format