Department of Computer Engineering
Browsing Department of Computer Engineering by Issue Date
- Performance Evaluation of Selected Principal Component Analysis-Based Techniques for Face Image Recognition (International Journal of Scientific & Technology Research, 2015). Janet O. Jooda. Principal Component Analysis (PCA) is an eigen-based technique popularly employed for redundancy removal and feature extraction in face image recognition. In this study, a performance evaluation of three selected PCA-based techniques for face recognition was conducted: Principal Component Analysis (PCA), Binary Principal Component Analysis (BPCA), and Principal Component Analysis-Artificial Neural Network (PCA-ANN). A database of 400 images of 50x50 pixels was created, covering 100 different individuals with 4 images of different facial expressions each. Three hundred images were used for training and 100 for testing the three face recognition systems. The systems were evaluated at three selected numbers of eigenvectors (75, 150 and 300) to determine the effect of eigenvector count on recognition rate. The techniques were compared on recognition rate and total recognition time. The evaluation showed that the PCA-ANN technique gave the best recognition rate of 94%, with a trade-off in recognition time. The recognition rates of PCA and BPCA increased as the number of eigenvectors decreased, while the change in the PCA-ANN recognition rate was negligible.
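A minimal eigenface-style sketch of this kind of pipeline follows, assuming scikit-learn and random placeholder images in place of the study's face database: PCA extracts the features and a 1-nearest-neighbour matcher stands in for the classifiers compared in the paper, while the component counts mirror the eigenvector sizes examined (75, 150, 300).

```python
# Hedged sketch only: random arrays replace the 400-image face database, and a
# 1-NN matcher stands in for the classifiers compared in the paper.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.random((400, 50 * 50))            # 400 flattened 50x50 "face" images
y = np.repeat(np.arange(100), 4)          # 100 subjects, 4 images each

# 300 images for training and 100 for testing, mirroring the study's split.
train_idx = np.concatenate([np.arange(i, i + 3) for i in range(0, 400, 4)])
test_idx = np.setdiff1d(np.arange(400), train_idx)

for n_components in (75, 150, 300):       # the eigenvector sizes examined
    pca = PCA(n_components=n_components).fit(X[train_idx])
    clf = KNeighborsClassifier(n_neighbors=1)
    clf.fit(pca.transform(X[train_idx]), y[train_idx])
    rate = clf.score(pca.transform(X[test_idx]), y[test_idx])
    print(f"{n_components} components: recognition rate {rate:.2%}")
```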
- Development of a Modified Simulated Annealing to School Timetabling Problem (International Journal of Applied Information Systems, 2015). Janet O. Jooda. This work presents a modified simulated annealing applied to solving a typical high school timetabling problem. Preparing a high school timetable consists basically of fixing a sequence of meetings between teachers and students in a prefixed period of time such that a certain set of constraints of various types is satisfied. The approach presented in the paper was successfully used to schedule the first school timetable of Fakunle Comprehensive High School, Osogbo, Nigeria during the 2012/2013 session, and it was capable of generating timetables for complex problem instances. The task involved 18 classes, 45 teachers and 15 subjects for the Junior Secondary School (JSS), with 3 levels (JSS 1 to JSS 3) of 6 arms each, and 24 classes, 77 teachers and 19 subjects for the Senior Secondary School (SSS), with 3 levels (SSS 1 to SSS 3) of 8 arms each (3 for the Science group, 3 for the Commercial group and 2 for the Arts group), scheduled over 6 hours a day for 5 days. The use of the implemented model resulted in significant time savings in scheduling the timetables and well-spread lessons for the teachers, and no teacher or class was double-booked. It was clearly evident that the developed modified simulated annealing reduces the major weakness of slow convergence (excessive convergence time) associated with classical simulated annealing.
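The sketch below illustrates the plain simulated-annealing loop that the paper modifies; it is not the modified variant itself. Lessons are hypothetical (class, teacher) pairs, the cost counts teacher and class double-bookings in each timeslot, and the cooling parameters are arbitrary assumptions.

```python
# Illustrative simulated annealing for timetabling (toy data, not the paper's model).
import math
import random

random.seed(1)
N_SLOTS = 30                                             # e.g. 6 periods x 5 days
lessons = [(c, t) for c in range(18) for t in random.sample(range(45), 5)]

def cost(assign):
    """Count class and teacher double-bookings across all timeslots."""
    clashes = 0
    for s in range(N_SLOTS):
        in_slot = [lessons[i] for i, slot in enumerate(assign) if slot == s]
        classes = [c for c, _ in in_slot]
        teachers = [t for _, t in in_slot]
        clashes += (len(classes) - len(set(classes))) + (len(teachers) - len(set(teachers)))
    return clashes

assign = [random.randrange(N_SLOTS) for _ in lessons]    # random initial timetable
cur, temp, cooling = cost(assign), 10.0, 0.995
while temp > 0.01 and cur > 0:
    i = random.randrange(len(lessons))
    old_slot = assign[i]
    assign[i] = random.randrange(N_SLOTS)                # propose moving one lesson
    new = cost(assign)
    if new <= cur or random.random() < math.exp((cur - new) / temp):
        cur = new                                        # accept the move
    else:
        assign[i] = old_slot                             # reject and revert
    temp *= cooling
print("remaining double-bookings:", cur)
```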
- Design Issues in Sentiment Analysis for Yorùbá Written Text (Ife Journal of Science and Technology, 2019). Omolayo Abegunde. Sentiment Analysis (SA) is an exciting and important field in Artificial Intelligence, combining human language processing, machine learning and psychology. It is a means of understanding a user's opinion about an event. The goal of SA is to extract the opinion expressed in a text, the targets of the opinion and the reasons for the opinion. However, a great number of research efforts are dedicated to English-language data, while a large share of information is available in other languages as well, and none yet addressed Yorùbá. This work examines the design issues involved in automating SA for the standard Yorùbá language. The SA process, which includes data cleaning, data annotation and related steps, is highlighted. The structure of Yorùbá text is described and a text corpus design for a Yorùbá sentiment analysis system is presented. The outcome of this work provided suitable requirements for the design.
- Predicting Sentiment in Yorùbá Written Texts: A Comparison of Machine Learning Models (Springer Nature Switzerland, 2020). Omolayo Abegunde. Sentiment analysis (SA) provides a rich set of tools and techniques for extracting and evaluating subjective information from large datasets. Users' opinions concerning an event determine their perspective of that event, whether good or bad. This study compared three machine learning models (logistic regression, Naïve Bayes, and support vector machine) with a view to identifying the best model for predicting sentiment in Yorùbá written texts at the sentence level. The corpus of Yorùbá records was created from several online and offline sources such as dictionaries, experts, the Bible and social media, as well as the Awayoruba blog, and processed using Tákàdá. The system was implemented in the Python programming language and evaluated using mean opinion score and receiver operating characteristics. The research concludes that Naïve Bayes (NB) outperformed the other algorithms for sentiment analysis of Yorùbá sentences.
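A hedged scikit-learn sketch of the three-model comparison follows. The Yorùbá corpus and the Tákàdá preprocessing tool are not reproduced; the sentences and labels below are toy placeholders, and TF-IDF features with cross-validated accuracy stand in for the paper's evaluation.

```python
# Sketch of the logistic regression / Naive Bayes / SVM comparison on toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

sentences = ["o dara pupo", "ko dara rara", "inu mi dun", "ibanuje nla ni eyi"] * 10
labels = [1, 0, 1, 0] * 10                    # 1 = positive, 0 = negative (illustrative)

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Naive Bayes": MultinomialNB(),
    "Support Vector Machine": LinearSVC(),
}
for name, clf in models.items():
    pipe = make_pipeline(TfidfVectorizer(), clf)
    scores = cross_val_score(pipe, sentences, labels, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```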
- Authorship Verification of Yorùbá Blog Posts using Character N-grams (ICMCECS (IEEE), 2020). Omolayo Abegunde. The task of determining whether a pair (or more) of documents were written by the same author falls under authorship verification. N-grams are sequences of elements appearing in texts; they can be words, POS tags, characters, or other elements that occur one after another in a text. Authorship verification is more challenging than related tasks because it focuses on whether the target author and the text under examination have a closely related style. In this paper, an authorship verification task on Yorùbá blog posts is presented. N-gram features were extracted from the corpus, and inductive learning techniques were applied to build feature-based models for automatic author identification. The K-means clustering algorithm was used because supervised algorithms cannot be applied to the one-class nature of the dataset. Evaluation was done with the Silhouette Coefficient, which is suited to evaluating unlabelled data. The result obtained was positive, indicating that the data points are strongly related and suggesting that the posts were written by the same author.
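The pipeline described above can be sketched with scikit-learn as below, assuming placeholder snippets in place of the Yorùbá blog posts: character n-gram features, K-means clustering, and the Silhouette Coefficient as the evaluation measure.

```python
# Character n-gram + K-means sketch; the posts are placeholders, not the study's corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

posts = [
    "bi a se nse ounje ni ile wa",
    "bi a se nse oge ni ilu wa",
    "awon oro ijinle nipa asa ati ise",
    "awon itan ijinle nipa asa ile wa",
]

# Character n-grams (here 2- to 4-grams) capture writing style rather than topic.
vectorizer = TfidfVectorizer(analyzer="char", ngram_range=(2, 4))
X = vectorizer.fit_transform(posts)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
score = silhouette_score(X, kmeans.labels_)
print("silhouette coefficient:", round(score, 3))   # closer to 1 means tighter clusters
```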
- Development of a Predictive Fuzzy Logic Model for Monitoring the Risk of Sexually Transmitted Diseases (STD) in Female Human (International Research Journal of Engineering and Technology, 2020). Janet O. Jooda. The purpose of this study is to develop a classification model for monitoring the risk of sexually transmitted diseases (STDs) among females using information about non-invasive risk factors. The specific research objectives are to identify the risk factors associated with the risk of STDs, formulate the classification model, and simulate the model. Structured interviews with expert physicians were conducted to identify the risk factors associated with the risk of STDs in Nigeria, after which relevant data were collected. Fuzzy triangular membership functions were used to map the labels of the input risk factors and the output STD risk of the classification model to associated linguistic variables. The inference engine of the classification model was formulated using IF-THEN rules to associate the labels of the input risk factors with their respective STD risk. The model was simulated using the fuzzy logic toolbox in the MATLAB® R2015a simulation software. The results showed that 9 non-invasive risk factors were associated with the risk of STDs among female patients in Nigeria: marital status, socio-economic status, toilet facility used, age at first sexual intercourse, practice of protected sex, sexual activity (in the last 2 weeks), lifetime partners, practice of casual sex, and history of STDs. Two, three and four triangular membership functions were appropriate for formulating the linguistic variables of the factors, while the target risk was formulated using four triangular membership functions for the linguistic variables no risk, low risk, moderate risk and high risk. The 2304 inferred rules were formulated using IF-THEN statements that adopted the values of the factors as the antecedent and the STD risk as the consequent of each rule. The study concluded that, using information about the associated risk factors, fuzzy logic modelling can be adopted to predict STD risk.
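The paper builds its model in the MATLAB fuzzy logic toolbox; the sketch below is only a Python analogue using the scikit-fuzzy package, with two hypothetical factors (out of the nine identified) and a handful of illustrative rules rather than the 2304 formulated in the study.

```python
# Illustrative Mamdani-style fuzzy inference with triangular membership functions.
# Requires scikit-fuzzy; factor universes, labels and rules are assumptions.
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

age = ctrl.Antecedent(np.arange(10, 41, 1), "age_at_first_sex")
partners = ctrl.Antecedent(np.arange(0, 21, 1), "lifetime_partners")
risk = ctrl.Consequent(np.arange(0, 101, 1), "std_risk")

age["early"] = fuzz.trimf(age.universe, [10, 10, 20])
age["late"] = fuzz.trimf(age.universe, [18, 40, 40])
partners["few"] = fuzz.trimf(partners.universe, [0, 0, 5])
partners["many"] = fuzz.trimf(partners.universe, [3, 20, 20])
risk["low"] = fuzz.trimf(risk.universe, [0, 0, 40])
risk["moderate"] = fuzz.trimf(risk.universe, [30, 50, 70])
risk["high"] = fuzz.trimf(risk.universe, [60, 100, 100])

rules = [
    ctrl.Rule(age["late"] & partners["few"], risk["low"]),
    ctrl.Rule(age["early"] & partners["few"], risk["moderate"]),
    ctrl.Rule(partners["many"], risk["high"]),
]
sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
sim.input["age_at_first_sex"] = 15
sim.input["lifetime_partners"] = 8
sim.compute()
print("predicted STD risk score:", round(sim.output["std_risk"], 1))
```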
- Influence of Eigenvector on Selected Facial Biometric Identification Strategies (World Journal of Engineering Research and Technology, 2020-02-16). Jooda, Janet. Face identification strategies are becoming more popular among biometric-based strategies because they measure an individual's natural data, authenticating and identifying individuals by analysing their physical characteristics. For a face identification system to be efficient and robust enough to serve its security purpose, the best strategy must be chosen from the many strategies proposed in the literature for face identification. Among the most popularly used face identification strategies, Principal Component Analysis (PCA), Binary Principal Component Analysis (BPCA), and Principal Component Analysis-Artificial Neural Network (PCA-ANN) were selected for performance evaluation. The experiment varied the number of eigenvectors of the training images for each strategy and compared performance using Recognition Rate (RR) and Total Recognition Time (TR) as metrics. Results showed that the PCA-ANN strategy gave the best recognition rate of 94%, with a trade-off in recognition time. The recognition rates of PCA and BPCA increased as the number of eigenvectors decreased, while the change in the PCA-ANN recognition rate was negligible. Hence PCA-ANN outperformed the other face identification strategies.
- Selected Soft Computing Algorithms for Solving Travelling Salesman Problem (International Journal of Progressive Sciences and Technologies, 2021). Janet O. Jooda. The Traveling Salesman Problem (TSP) is a classic algorithmic problem in the fields of computer science and operations research. It is focused on optimisation; in this context, a better solution often means a solution that is cheaper, shorter, or faster. TSP is a mathematical problem most easily expressed as a graph describing the locations of a set of nodes. Given a set of cities and the distance between every pair of cities, the problem is to find the shortest possible route that visits every city exactly once and returns to the starting point. The aim of this project is to adapt the Bat, Bee, Firefly and Flower Pollination algorithms, and to implement and evaluate them for solving the Travelling Salesman Problem.
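A small sketch of the objective that each adapted algorithm optimises is given below: a tour visiting every city exactly once and returning to the start, scored by total Euclidean length. The random segment-reversal loop is only a stand-in for the Bat, Bee, Firefly and Flower Pollination metaheuristics themselves.

```python
# TSP objective plus a simple 2-opt style improvement loop on random cities.
import math
import random

random.seed(0)
cities = [(random.random(), random.random()) for _ in range(15)]

def tour_length(order):
    """Total length of the closed tour visiting cities in the given order."""
    return sum(
        math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
        for i in range(len(order))
    )

tour = list(range(len(cities)))
random.shuffle(tour)
best = tour_length(tour)
for _ in range(5000):
    i, j = sorted(random.sample(range(len(cities)), 2))
    tour[i:j + 1] = reversed(tour[i:j + 1])              # reverse a segment
    new = tour_length(tour)
    if new < best:
        best = new                                       # keep the improvement
    else:
        tour[i:j + 1] = reversed(tour[i:j + 1])          # undo the reversal
print("shortest tour length found:", round(best, 3))
```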
- Design and Implementation of Mobile Information System for Federal Road Safety Corps (FRSC) of Nigeria (International Journal of Sensor Networks and Data Communications, 2021). Omolayo Abegunde. With the daily increase in the use of mobile devices in the 21st century, handheld devices are fast reaching the unreached and information is now easily disseminated. Nigeria, as a developing nation in West Africa, needs to be fully information-technology compliant. Far from this, vehicles have so far been registered manually. This mobile information system is designed to aid every member of the Nigerian community in building an information network with the Federal Road Safety Corps (FRSC). Motorists, drivers and others who had registered their vehicles manually would be able to register their vehicle number plates and report accident victims to the Corps with ease from their mobile devices. This work focuses mainly on vehicle registration, issuing of number plates and information dissemination for the Federal Road Safety Corps, Nigeria.
- A Comprehensive Analysis of COVID-19 Spread in Nigeria (Baze University, 2021). Omolayo Abegunde. The COVID-19 pandemic, which emanated from China, was not only unexpected by the rest of the world but also resulted in an economic downturn. In Nigeria, attempts have been made at different levels of government to combat the spread of the virus, with some promising results. In this paper, we examined the spread from February 29 to December 27, 2020. The findings were based on data from reported cases, deaths, recoveries, and active cases. The data were preprocessed and feature engineering was performed by adding new features (active, days, month). The pandas library was used to analyse the two sets of data. The results of the analysis provide a description of the spread of the COVID-19 pandemic in Nigeria and underline the need to slow it down even further.
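A hedged pandas sketch of the feature-engineering step follows; the column names and the tiny inline table are assumptions standing in for the reported-case dataset used in the paper.

```python
# Adding the described 'active', 'days' and 'month' features to a toy case table.
import pandas as pd

df = pd.DataFrame({
    "date": pd.date_range("2020-02-29", periods=5, freq="30D"),
    "confirmed": [1, 200, 9000, 40000, 60000],
    "deaths": [0, 5, 250, 800, 1100],
    "recovered": [0, 40, 2600, 30000, 50000],
})

df["active"] = df["confirmed"] - df["deaths"] - df["recovered"]
df["days"] = (df["date"] - df["date"].min()).dt.days     # days since the first case
df["month"] = df["date"].dt.month

print(df[["date", "active", "days", "month"]])
```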
- Comparative Analysis of Feature Level Fusion Bimodal Biometrics for Access Control (International Journal of Progressive Sciences and Technologies, 2021). Janet O. Jooda. The increasing quest for dependable, robust and secure recognition systems has led to combining two or more biometric modalities to improve the performance of a biometric system. Bimodal biometric systems have proven to achieve clear advantages over unimodal systems in applications such as access control, surveillance, forensics, deduplication and border control. In this study, a comparative analysis of combinations of three biometric traits fused at the feature level was carried out. Face, fingerprint and iris images were acquired from the LAUTECH biometric database. The bimodal setup consisted of face-iris, face-fingerprint and iris-fingerprint modalities. Principal Component Analysis (PCA) was employed for feature extraction, a weighted-sum technique was used to fuse the features, and a Support Vector Machine (SVM) was used for classification. Experimental results revealed that the bimodal systems achieved better performance than the unimodal ones. The combination of face and iris features achieved the best performance, with FAR, FRR and accuracy of 0.00%, 1.42% and 99.00% respectively, at 38.15 seconds. Hence, a bimodal face-iris recognition system would produce a more reliable security surveillance system for access control than the other combinations compared in this study.
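A sketch of the feature-level fusion idea on synthetic data is given below: PCA features from two modalities are combined by a weighted sum and classified with an SVM. The fusion weights, feature dimensions and data are illustrative, not the values used with the LAUTECH database.

```python
# Weighted-sum feature-level fusion of two synthetic modalities, classified by SVM.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_subjects, per_subject = 20, 6
y = np.repeat(np.arange(n_subjects), per_subject)
face = rng.random((y.size, 400)) + y[:, None] * 0.05     # stand-in face features
iris = rng.random((y.size, 400)) + y[:, None] * 0.05     # stand-in iris features

k = 30
face_pca = PCA(n_components=k).fit_transform(face)
iris_pca = PCA(n_components=k).fit_transform(iris)
fused = 0.6 * face_pca + 0.4 * iris_pca                  # weighted-sum fusion

Xtr, Xte, ytr, yte = train_test_split(fused, y, test_size=0.25, stratify=y, random_state=0)
clf = SVC(kernel="rbf").fit(Xtr, ytr)
print("fused face-iris accuracy:", round(clf.score(Xte, yte), 3))
```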
- Fingerprint Intramodal Biometric System Based on ABC Feature Fusion (Asian Journal of Research in Computer Science, 2021-08-13). Jooda, Janet. Unimodal biometric system (UBS) drawbacks include noisy data, intra-class variance, inter-class similarity and non-universality, all of which affect the system's classification performance. Intramodal fingerprint fusion can overcome the limitations imposed by a UBS when features are fused at the feature level, as this is a good approach for boosting the performance of the biometric system. However, feature-level fusion leads to high dimensionality of the feature space, which can be overcome by Feature Selection (FS). FS, being an optimisation problem, improves classification performance by selecting only relevant and useful information from the extracted feature sets. The Artificial Bee Colony (ABC) is an optimisation algorithm that has frequently been used to solve FS problems because of its simple concept, use of few control parameters, easy implementation and good exploration characteristics. ABC was proposed for optimised feature selection prior to classification in a Fingerprint Intramodal Biometric System (FIBS). Performance evaluation of the ABC-based FIBS showed the system had a sensitivity of 97.69% and a recognition accuracy (RA) of 96.76%. The developed ABC-optimised feature selection reduced the high dimensionality of the feature space prior to classification, thereby increasing the sensitivity and recognition accuracy of the FIBS.
- A Review on Hybrid Artificial Bee Colony for Feature Selection (Global Journal of Advanced Research, 2021-08-30). Jooda, Janet. Due to the presence of redundant and irrelevant features in datasets, the high dimensionality of the feature space has an impact on classification accuracy and computational complexity. Feature selection retains the most relevant and valuable information and aids classification speed. Since finding a suitable, optimal feature subset is critical, feature selection is viewed as an optimisation problem. One of the efficient nature-inspired optimisation techniques for handling combinatorial optimisation problems is the Artificial Bee Colony (ABC) algorithm. It has no sensitive control parameters and has been demonstrated to compete with other well-known algorithms. However, it has poor local search performance: the solution-search equation in ABC performs well for exploration but poorly for exploitation. Furthermore, it has a quick convergence rate and can thus become trapped in local optima for some complex multimodal problems. Since its introduction, much research has been conducted to address these issues in order to make ABC more efficient and applicable to a wide range of applications. This paper provides an overview of ABC advances, applications, comparative performance, and future research opportunities.
- Development of Turbo Code Error Detection and Correction Scheme for Wireless Telemedicine Video Transmission (LAUTECH Journal of Computing and Informatics (LAUJCI), 2021-09). Olayinka Oyeronke Oyefunke. Transmission of medical images and videos to a distant location over a wireless network for proper diagnosis of the patient is a core aspect of telemedicine. During data transmission over the wireless communication channel, noise and other impairments are introduced into the data and cause errors in the transmitted data. Hence, there is a need for a method to detect and correct errors that may lead to an erroneous diagnosis. Turbo codes were among the earliest error-correcting codes designed to establish dependable communication near the channel capacity with practically feasible hardware. They have excellent error-correcting capability, which makes them appropriate for many internet communication technologies. This paper presents an effective and efficient method to detect and correct errors encountered during the transmission of telemedicine video over the wireless channel, using a parallel concatenated turbo code error detection and correction scheme. A MATLAB simulation was carried out to investigate and demonstrate the performance of the proposed system. The performance of the developed system was measured over a range of SNR values using BER, PSNR, MSE and the processing time of the decoders, and compared with transmission without turbo coding. The simulation results show better performance when the turbo code was applied than when it was not.
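The simulation itself was done in MATLAB; the NumPy snippet below only sketches the reported evaluation metrics (BER, MSE and PSNR) on a synthetic 8-bit frame corrupted by random channel errors, not the turbo encoder or decoder.

```python
# Computing BER, MSE and PSNR for a toy frame; the turbo codec is not reproduced.
import numpy as np

rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # one video frame
noisy = original.copy()
flip = rng.random(original.shape) < 0.01                         # simulated channel errors
noisy[flip] = 255 - noisy[flip]

mse = np.mean((original.astype(float) - noisy.astype(float)) ** 2)
psnr = 10 * np.log10(255 ** 2 / mse) if mse > 0 else float("inf")
ber = np.mean(np.unpackbits(original) != np.unpackbits(noisy))

print(f"BER={ber:.4f}  MSE={mse:.2f}  PSNR={psnr:.2f} dB")
```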
- Development of a Face Recognition System Using Hybrid Genetic-Principal Component Analysis (1st International Conference on Electrical, Electronic, Computer Engineering & Allied Multidisciplinary Field, 2021-12). Ibikunle, Akinola. Humans have used physical attributes such as face, voice, gait and fingerprints to recognise each other for ages. With recent technological advancement, face recognition, a branch of biometric systems, has received considerable interest because of the ease of collecting, analysing and recognising face images. It is a system that compares an unknown image against the trained images in a database in order to identify the image. It has a number of applications such as Automatic Teller Machines (ATMs), credit cards, physical access control, national identity cards and correctional facilities, and has been found to be one of the ways of controlling and reducing crime rates. The development and performance evaluation of a face recognition system using a hybrid Genetic-Principal Component Analysis technique is presented. The system consists of three major subsystems: initial preprocessing procedures are applied to input face images selected from the ORL database, face features are then extracted from the processed images by principal component analysis, and finally face identification is carried out using a genetic algorithm. Image resolutions of 50 x 50, 70 x 70, 100 x 100 and 140 x 140 were used in training and testing the system. The identification rates obtained were 100%, 96.36%, 93.63% and 90.90% for 50 x 50, 70 x 70, 100 x 100 and 140 x 140 respectively. This experimental result revealed that the lower the resolution of the cropped images, the higher the number of correctly identified face images, which is attributed to the variation in the features considered for recognition at each resolution. Hence, this technique proved to be more robust and better suited to low-resolution images.
- Long-Short-Term Memory Model for Fake News Detection in Nigeria (Ianna Journal of Interdisciplinary Studies, 2023). Janet O. Jooda. Background: The advent of technology allows information to be passed through the Internet at breakneck speed and enables many individuals to use different social media platforms. Propagation of fake news through the Internet has become rampant due to digitalisation, and the spread of fake news can cause irreparable damage to its victims. The conventional approach to fake news detection is time-consuming, hence the introduction of fake news detection systems; existing systems have yielded low accuracy and are unsuitable for Nigeria. Objective: This research aims to design and implement a framework for fake news detection using the Long-Short Term Memory (LSTM) model. Methodology: The dataset for the model was obtained from Nigerian dailies and Kaggle and pre-processed by removing punctuation marks and stop words, stemming, tokenisation and one-hot representation. Feature extraction was done on the datasets to remove outliers, and the locally acquired Nigerian dataset was balanced using the Synthetic Minority Oversampling Technique (SMOTE). Long-Short Term Memory (LSTM), a variant of the Recurrent Neural Network (RNN) that solves the RNN problem of losing gained knowledge and information over a long period, was used as the detection model and implemented using Python 3.9. The model detected fake news by classifying news as real or fake: the dataset was fed into the model, which processed it through input and hidden layers with varying numbers of neurons. Accuracy, F1 score and detection time were used as the evaluation metrics, and the results were compared to selected machine learning models and a hybrid of convolutional neural network and long short-term memory (CNN-LSTM) models. Results: The LSTM model on the balanced dataset performed best, as the two news classes were accurately classified, giving an average detection accuracy of 92.86% and taking 0.42 seconds to detect whether a news item was real or fake; an 87.50% average detection accuracy was obtained on the imbalanced dataset. In comparison, SVM and CNN-LSTM both gave 81.25% accuracy on the imbalanced dataset, and 82.14% and 78.57% respectively on the balanced dataset. Conclusion: The outcome of this research shows that the deep learning approach outperformed some machine learning models for fake news detection in terms of accuracy.
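A hedged Keras sketch of an LSTM classifier of the kind described follows. The Nigerian and Kaggle corpora, SMOTE balancing and the tuned layer sizes are not reproduced; the texts, labels and hyperparameters below are toy assumptions.

```python
# Minimal LSTM text classifier sketch (toy data, illustrative hyperparameters).
import numpy as np
import tensorflow as tf

texts = ["government approves new budget", "miracle cure discovered overnight"] * 20
labels = np.array([0, 1] * 20)                           # 0 = real, 1 = fake (toy)

vectorize = tf.keras.layers.TextVectorization(max_tokens=5000, output_sequence_length=20)
vectorize.adapt(texts)
X = vectorize(np.array(texts))

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=5000, output_dim=32),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1, activation="sigmoid"),      # probability of being fake
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, labels, epochs=3, batch_size=8, verbose=0)
print("training accuracy:", model.evaluate(X, labels, verbose=0)[1])
```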
- Automatic Plagiarism Detection Using Fuzzy-Logic (Dutse Journal of Pure and Applied Sciences (DUJOPAS), 2023). Janet O. Jooda. Plagiarism occurs when a researcher copies a fellow researcher's work verbatim without acknowledging the author. This work developed an automatic plagiarism detector using fuzzy logic. The system was tested with 4 different text documents and evaluated using portability, efficiency, functionality, ease-of-use and accuracy metrics. Results show that the developed plagiarism detector is very easy to use, with high functionality and accuracy as well as moderate efficiency and portability, based on users' assessment. However, future work can increase the data size for model training and consider machine learning techniques to improve accuracy.
- An Automatic Door Lock Security System Based on Convolutional Neural Network (Dutse Journal of Pure and Applied Sciences (DUJOPAS), 2023). Janet O. Jooda. Door lock security is important because it prevents intruders or unauthorised individuals from entering our homes or offices. Previous door lock security systems based on passwords, radio frequency identification and facial recognition have proved unreliable. Therefore, this project developed an automatic door lock security system using a Convolutional Neural Network (CNN). Fifty faces of homeowners were captured and used to train the CNN. The system was tested at different distances and lighting conditions and evaluated using accuracy, precision and F1 score. Results show that the developed system is 91.67% accurate. However, it is recommended that future work consider enlarging the dataset for model training to obtain more accurate results.
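A minimal Keras CNN sketch in the spirit of the described system is shown below; the 64x64 input size, layer sizes and random placeholder images are assumptions, with one output class per enrolled homeowner.

```python
# Toy CNN face classifier: random arrays stand in for the 50 captured owner faces.
import numpy as np
import tensorflow as tf

n_owners = 50
X = np.random.rand(200, 64, 64, 1).astype("float32")     # placeholder face crops
y = np.random.randint(0, n_owners, size=200)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(n_owners, activation="softmax"),   # one class per owner
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=2, batch_size=16, verbose=0)
# In deployment, a confident prediction for an enrolled owner would trigger the lock.
```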
- Novel Method to Ensure Security in Telecommunications Systems (Asian Basic and Applied Research Journal, 2023). Janet O. Jooda. Data in the cloud and all forms of wireless communication are susceptible to many forms of attack. Forming a hybridised cipher from symmetric and asymmetric algorithms allays security concerns, given the identified weakness of single-layer encryption. However, the hybridised cipher still requires the exchange of secret keys between sender and receiver and pays little attention to throughput. Hence, this research removes the need to share the secret key of a private-key cipher by developing a hybrid cipher that uses Elliptic Curve Cryptography (ECC) for key exchange of the symmetric-key cipher RC4c. The stages involved in the developed ECCRC4c algorithm are ECC key exchange and RC4c encryption. Data of varying lengths, in steps of 8 bits, were used to estimate the performance metrics of the proposed system. The results showed a significant improvement in throughput and computation time over the existing algorithm: an improvement of 94.6% in throughput and 89.9% in computation time, which implies power savings. The performance metrics of a hybrid cryptographic algorithm are proportional to the performance metrics of its component ciphers, as shown by the individual metrics of DES, RSA, ECC, RC4c, RSADES and ECCRC4c.
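The sketch below illustrates the hybrid idea with known primitives: an ECDH exchange (via the `cryptography` package) removes the need to pre-share a key, and plain RC4 stands in for the authors' modified RC4c cipher, whose internals are not given in the abstract.

```python
# ECC key exchange + RC4 stream cipher sketch; RC4c itself is not reproduced here.
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

def rc4(key: bytes, data: bytes) -> bytes:
    """Standard RC4: key scheduling followed by keystream XOR."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

# Each party generates an EC key pair; only public keys cross the channel.
alice = ec.generate_private_key(ec.SECP256R1())
bob = ec.generate_private_key(ec.SECP256R1())
shared = alice.exchange(ec.ECDH(), bob.public_key())
key = HKDF(algorithm=hashes.SHA256(), length=16, salt=None, info=b"stream key").derive(shared)

ciphertext = rc4(key, b"patient record 001")
print(rc4(key, ciphertext))   # RC4 is symmetric, so decrypting recovers the plaintext
```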
- Feature Fusion Using GSA for Multi-Instance Authentication System (Asian Research Journal of Current Science, 2023). Janet O. Jooda. Multi-instance fusion of a fingerprint authentication system at the score level overcomes some of the shortcomings of a Unimodal Biometric System (UBS) and enhances the efficiency of the system. However, because information is lost at higher levels, features fused at the score level are limited in comparison to feature-level fusion, which can lead to poor performance. In this study, multi-instance fusion of fingerprints was done at the feature level using the Gravitational Search Algorithm (GSA) to select and combine minimal, relevant and informative texture-feature subsets from multiple instances of a fingerprint, considerably improving the performance of the system. The approach was validated on a multi-instance fingerprint database acquired locally from 150 subjects in an uncontrolled environment; texture-based feature extraction was used, and classification of the fused texture features was done with a back-propagation neural network. The results show that the presented technique was effective in subject authentication, with an accuracy of 97.09%, indicating that it can successfully secure fingerprint authentication systems against unauthorised attacks.