Any ETL (Extract, Transform, and Load) system that handles a meaningful dataset must cope with data omission and inconsistency, which in most cases result from an ineffective implementation of the associated operational systems. If the ETL system fails to address them, it may lose utility. Since such situations are far from rare, it is necessary, from an early stage of the process, to identify, characterize, and resolve potential bottlenecks in system execution. In this work, we report the process we developed to assess the quality of an ETL system's execution, identifying and characterizing eventual bottlenecks - black spots - and, from that point on, generating a performance quality index that reflects the "well-being" of the system and provides information for resolving any black spots, thereby improving the system's quality of service.