Please use this identifier to cite or link to this item: http://ir.futminna.edu.ng:8080/jspui/handle/123456789/3423
Full metadata record
DC Field: Value (Language)
dc.contributor.author: Anda, Ilyasu
dc.contributor.author: Isah, Omeiza Rabiu
dc.contributor.author: Aminu, Enesi Femi
dc.contributor.author: Zubairu, Hussaini Abubakar
dc.date.accessioned: 2021-06-16T16:01:09Z
dc.date.available: 2021-06-16T16:01:09Z
dc.date.issued: 2018-06
dc.identifier.issn: 2006-1781
dc.identifier.uri: http://repository.futminna.edu.ng:8080/jspui/handle/123456789/3423
dc.description.abstract: Some datasets are prone to risks and hazards that inadvertently compromise the integrity of the data, producing errors in result interpretation and usage which can sometimes escalate to disastrous levels. Despite these potential errors leading to various mishaps, this part of the system has largely been ignored. This paper experimentally evaluates a Safety Data Model intended to ensure the safety of data used in analysis for decision making. It focuses on the safety of data in a critical application, taking into consideration the integrity of the data and the time taken to extract and publish the XML files to the data server. The model represents the data in a concise format from which a consumer can easily assess the sources and evaluate the integrity of the data before any decision is made. Twelve (12) Excel files of Safety Related Condition Report (SRCR) data covering 2002 to 2013, containing a total of 1,039 rows, were used. The Extract, Transform, Load (ETL) process completed in roughly 20.703 seconds. Modern ETL software tools, including Microsoft SQL Server 2012 Data Tools and Microsoft SQL Server Management Studio, were used for data manipulation. The prototype filtered the data into safe, unsafe and hazardous categories ready to be loaded into the Data Warehouse (DW), and generated an XML document containing the safe, unsafe and hazardous data. The prototype proved effective: it built the XML data within 0.484 seconds and merged and published the XML documents within 12.719 seconds. The purpose is to show end users the actual data so they can judge whether it is truly safe, truly unsafe or truly hazardous according to the quality summary in the metadata. The end user can also verify the whole dataset from the source if necessary; if the end user is satisfied with the quality of the data, the safe data can be extracted directly from the XML. (en_US)
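The abstract describes a pipeline that filters rows into safe, unsafe and hazardous categories and publishes them as an XML document whose metadata carries a quality summary. The paper's prototype implements this with Microsoft SQL Server tooling; the following is only a minimal Python sketch of the same idea, in which the `severity` field, the threshold rule in `classify_row`, and the element names are hypothetical stand-ins, not the paper's actual schema.

```python
# Illustrative sketch only: the prototype in the paper uses SQL Server 2012
# Data Tools for ETL; the classification rule and field names here are assumed.
import xml.etree.ElementTree as ET

def classify_row(row):
    """Assign a row to the safe / unsafe / hazardous bucket.
    The severity thresholds are an assumed example; the real criteria
    are defined by the paper's Safety Data Model."""
    severity = row.get("severity", 0)
    if severity >= 8:
        return "hazardous"
    if severity >= 4:
        return "unsafe"
    return "safe"

def build_xml(rows):
    """Build an XML document holding the filtered data plus a quality
    summary in the metadata, mirroring the document the prototype
    publishes to the data server."""
    root = ET.Element("SRCRData")
    buckets = {"safe": [], "unsafe": [], "hazardous": []}
    for row in rows:
        buckets[classify_row(row)].append(row)

    # The quality summary lets a consumer judge the data before use.
    meta = ET.SubElement(root, "metadata")
    for name, items in buckets.items():
        ET.SubElement(meta, "count", category=name).text = str(len(items))

    # The actual records, grouped by category.
    for name, items in buckets.items():
        section = ET.SubElement(root, name)
        for row in items:
            rec = ET.SubElement(section, "record", id=str(row["id"]))
            rec.text = row.get("description", "")
    return root

rows = [
    {"id": 1, "severity": 2, "description": "routine check"},
    {"id": 2, "severity": 5, "description": "valve wear"},
    {"id": 3, "severity": 9, "description": "gas leak"},
]
doc = build_xml(rows)
print(ET.tostring(doc, encoding="unicode"))
```

A consumer of the published XML can read the `metadata` counts first and only then decide whether to extract the `safe` section, which is the verification workflow the abstract describes.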
dc.language.iso: en (en_US)
dc.subject: Big-Data (en_US)
dc.subject: Data Warehouse (en_US)
dc.title: Experimental Evaluation of a Safety Data Model to determine the Time Taken to Build, Merge and Publish XML File onto a Data Server (en_US)
dc.type: Article (en_US)
Appears in Collections: Computer Science

Files in This Item:
File: AFRJCICT_Experimental.pdf (422.91 kB, Adobe PDF)


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.