Explainable Artificial Intelligence (XAI) makes AI models understandable to human users, particularly when the
model is complex and opaque. Local Interpretable Model-agnostic Explanations (LIME) provides an image explainer
package that is used to explain deep learning models. LIME's image explainer requires several parameters to be
tuned manually by an expert in advance, including the number of top features to display and the number
of superpixels in the segmented input image. This parameter tuning is time-consuming. Hence, with the
aim of developing an image explainer that automates image segmentation, this paper proposes the Ensemble-based Genetic Algorithm Explainer (EGAE) for melanoma detection, which automatically detects and
presents the informative sections of the image to the user. EGAE has three phases. First, the sparsity of
the chromosomes in the genetic algorithms (GAs) is determined heuristically. Then, multiple GAs are executed consecutively;
these GAs differ in the number of superpixels in the input image, which results in
different chromosome lengths. Finally, the results of the GAs are ensembled using consensus and majority voting.
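As an illustration, the voting step described above can be sketched as follows. The masks, the number of GA runs, and the voting threshold here are hypothetical, since the abstract does not fix these details; the sketch only assumes that each GA run is projected back to a per-pixel binary mask before voting.

```python
import numpy as np

def ensemble_masks(masks, rule="majority"):
    """Combine binary pixel masks from several GA runs.

    masks: list of HxW {0,1} arrays, one per GA run (each run may have
    used a different number of superpixels, but all masks are projected
    back to pixel space so they share the same shape).
    rule: "majority" keeps a pixel selected by more than half the runs;
          "consensus" keeps a pixel only if every run selected it.
    """
    stack = np.stack(masks)     # shape: (runs, H, W)
    votes = stack.sum(axis=0)   # per-pixel vote count
    if rule == "majority":
        return (votes > len(masks) / 2).astype(np.uint8)
    if rule == "consensus":
        return (votes == len(masks)).astype(np.uint8)
    raise ValueError(f"unknown rule: {rule}")

# Hypothetical masks from three GA runs on a 4x4 image.
runs = [
    np.array([[1, 1, 0, 0]] * 4),
    np.array([[1, 0, 0, 0]] * 4),
    np.array([[1, 1, 1, 0]] * 4),
]
majority = ensemble_masks(runs, "majority")    # pixel kept if 2 of 3 runs agree
consensus = ensemble_masks(runs, "consensus")  # pixel kept only if all 3 agree
```

The consensus rule is stricter than the majority rule, so it always yields a subset of the majority mask.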
This paper also shows how the Euclidean distance between the actual explanation (delineated by experts)
and the explanation computed by the explainer can be used to measure explanation accuracy.
Experimental results on a melanoma dataset show that EGAE automatically detects informative
lesions and improves explanation accuracy compared with LIME, while remaining efficient. The Python
code for EGAE, the ground truths delineated by clinicians, and the melanoma detection dataset are available
at https://github.com/KhaosResearch/EGAE.
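The Euclidean-distance accuracy measure mentioned above can be sketched as follows. Treating both the expert delineation and the computed explanation as flattened binary masks is an assumption here, as the abstract does not specify the exact representation.

```python
import numpy as np

def explanation_distance(ground_truth, explanation):
    """Euclidean distance between an expert-delineated mask and a
    computed explanation mask. A distance of 0 means a perfect match;
    larger values mean the explanation strays further from the ground truth.
    """
    gt = np.asarray(ground_truth, dtype=float).ravel()
    ex = np.asarray(explanation, dtype=float).ravel()
    return float(np.linalg.norm(gt - ex))

# Hypothetical 2x2 masks: the explanation misses one lesion pixel.
gt = [[1, 1], [0, 0]]
ex = [[1, 0], [0, 0]]
print(explanation_distance(gt, ex))  # one differing pixel -> distance 1.0
```

For binary masks this distance is simply the square root of the number of pixels on which the two masks disagree.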