Department of Computer Sciences
Browsing Department of Computer Sciences by Title
Now showing 1 - 20 of 120
- Item A Comparative Study of Two Convolutional Neural Network Models for Detecting Rice Plant Diseases Using Online and Local Image Data (LAUTECH Journal of Computing and Informatics (LAUJCI), 2024-03) Toluwase A. Olowookere. Rice is one of the most widely consumed staple foods around the globe, yet rice fields are severely affected by diseases that can disrupt global food security. Early and accurate detection of rice diseases is essential for the recovery of affected plants, but identifying rice plant diseases manually is tedious and error-prone. Artificial intelligence (AI) driven models, such as Convolutional Neural Networks (CNNs), have proven very successful in detecting various crop diseases. This study therefore presents a comparative study of the effectiveness of two popular CNN architectures, ResNet and AlexNet, for detecting rice plant disease. The training data combine rice leaf images gathered locally from a rice farm in Ede, Osun State, Nigeria, with images from an online repository. The dataset consists of 5,200 images in four classes of 1,300 images each: Bacterial leaf blight, Brown spot, Blast, and Healthy. The effectiveness of the two trained models was measured using standard classification metrics: Accuracy, Precision, Recall, and F1-score. ResNet achieved a test accuracy of 95.25% against 92.91% for AlexNet. ResNet recorded a precision of 0.93 against 0.24 for AlexNet, a recall of 0.98 against 0.23, and an F1-score of 0.95 against 0.24. Overall, the ResNet model outperformed the AlexNet model in detecting rice plant diseases, most markedly brown spot disease.
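The record above reports results only. As a hedged illustration of the approach, the following is a minimal PyTorch sketch of fine-tuning a pretrained ResNet on a four-class rice-leaf image folder; the directory layout, the ResNet-18 depth, and all hyperparameters are assumptions rather than the paper's configuration.

```python
# Hypothetical sketch (not the paper's published code): fine-tuning a
# pretrained ResNet on a four-class rice-leaf image folder in PyTorch.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),   # ResNet's standard input size
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Assumed layout: rice_leaves/<class_name>/*.jpg with the four classes
# bacterial_leaf_blight, brown_spot, blast, healthy.
dataset = datasets.ImageFolder("rice_leaves", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))  # new 4-way head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:        # one illustrative epoch
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```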
- Item A Super Learner Ensemble-based Intrusion Detection System to Mitigate Network Attacks (IEEE, 2024) Toluwase Ayobami Olowookere. Governments and corporate institutions now rely heavily on integrated digital infrastructures, which are frequent targets of cyber threats such as intrusion, for which intrusion detection systems (IDS) have emerged. A key requirement for a robust IDS is reducing the rate of false positives and thus improving accuracy. In this study, three traditional machine learning (ML) algorithms, K-Nearest Neighbor (KNN), Naive Bayes (NB), and Decision Tree (DT), and three ensemble algorithms, Random Forest (RF), Light Gradient Boosting Machine (LGBM), and Extreme Gradient Boosting (XGBoost), were trained as intrusion detection models on the UNSW-NB15 dataset from the Australian Centre for Cyber Security's Cyber Range Lab. A super-learner ensemble model was then built using the two best ensemble models (XGBoost and RF) and the best traditional model (KNN) as its base learners. The super-learner reduced false positives and improved detection accuracy, reaching 98% accuracy, and was deployed in an IDS application to mitigate network attacks effectively and efficiently.
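The super-learner construction described above corresponds closely to stacked generalization. A minimal scikit-learn sketch under that reading follows, with XGBoost, Random Forest, and KNN as base learners; the logistic-regression meta-learner and fold count are assumptions, since the abstract does not name them.

```python
# Illustrative sketch of the super-learner idea: stacking XGBoost, Random
# Forest, and KNN base learners under a meta-learner. The paper's exact
# configuration (meta-learner, folds, preprocessing) is not specified here.
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from xgboost import XGBClassifier

base_learners = [
    ("xgb", XGBClassifier(n_estimators=200, eval_metric="logloss")),
    ("rf", RandomForestClassifier(n_estimators=200)),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
]

# Out-of-fold predictions from the base learners train the meta-learner,
# which is the core of the super-learner construction.
super_learner = StackingClassifier(
    estimators=base_learners,
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)
# super_learner.fit(X_train, y_train)  # X_train, y_train: UNSW-NB15 features/labels
```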
- Item An Adaptive Thresholding Algorithm-Based Optical Character Recognition System for Information Extraction in Complex Images (2021) Odim, Mba. Extracting text from images with complex backgrounds is a major challenge today, and many existing Optical Character Recognition (OCR) systems cannot handle it. As reported in the literature, even methods that can handle the problem still struggle to extract text from images with sharply varying contours and from touching or skewed words in scanned documents and images with complex backgrounds. There is, therefore, a need for new methods that can easily and efficiently extract text from such images, which is the primary motivation for this work. This study collected image data and investigated the processes involved in image processing and the techniques applied for data segmentation. It applied an adaptive thresholding algorithm to the selected images to properly segment text characters from each image's complex background, then used Tesseract, a machine-learning-based OCR engine, to extract the text from the image file. The images used were coloured images sourced from the internet in different formats, such as JPG, PNG, and WebP, and at different resolutions. A custom adaptive algorithm, built on Gaussian thresholding, was applied to the images to unify their complex backgrounds. It differs from the conventional Gaussian algorithm in that it dynamically generates the block size used to threshold the image, ensuring that, unlike conventional image segmentation, images are processed area-wise (in pixels) as specified by the algorithm at each instance. The system was implemented in Python 3.6. Experiments on fifty different images with complex backgrounds showed that the system extracted English character-based text with 69.7% word-level accuracy and 81.9% character-level accuracy, outperforming existing methods in character-level accuracy.
- Item An Adaptive Thresholding Algorithm-Based Optical Character Recognition System for Information Extraction in Complex Images (Journal of Computer Science, 2020-06-12) Ogunde, Adewale Opeoluwa. Extracting text from images with complex backgrounds is a major challenge today, and many existing Optical Character Recognition (OCR) systems cannot handle it. As reported in the literature, even methods that can handle the problem still struggle to extract text from images with sharply varying contours and from touching or skewed words in scanned documents and images with complex backgrounds. There is, therefore, a need for new methods that can easily and efficiently extract text from such images, which is the primary motivation for this work. This study collected image data and investigated the processes involved in image processing and the techniques applied for data segmentation. It applied an adaptive thresholding algorithm to the selected images to properly segment text characters from each image's complex background, then used Tesseract, a machine-learning-based OCR engine, to extract the text from the image file. The images used were coloured images sourced from the internet in different formats, such as JPG, PNG, and WebP, and at different resolutions. A custom adaptive algorithm, built on Gaussian thresholding, was applied to the images to unify their complex backgrounds. It differs from the conventional Gaussian algorithm in that it dynamically generates the block size used to threshold the image, ensuring that, unlike conventional image segmentation, images are processed area-wise (in pixels) as specified by the algorithm at each instance. The system was implemented in Python 3.6. Experiments on fifty different images with complex backgrounds showed that the system extracted English character-based text with 69.7% word-level accuracy and 81.9% character-level accuracy, outperforming existing methods in character-level accuracy.
- Item An Adaptive Thresholding Algorithm-Based Optical Character Recognition System for Information Extraction in Complex Images (Journal of Computer Science, 2020) Oguntunde, Bosede. Extracting text from images with complex backgrounds is a major challenge today, and many existing Optical Character Recognition (OCR) systems cannot handle it. As reported in the literature, even methods that can handle the problem still struggle to extract text from images with sharply varying contours and from touching or skewed words in scanned documents and images with complex backgrounds. There is, therefore, a need for new methods that can easily and efficiently extract text from such images, which is the primary motivation for this work. This study collected image data and investigated the processes involved in image processing and the techniques applied for data segmentation. It applied an adaptive thresholding algorithm to the selected images to properly segment text characters from each image's complex background, then used Tesseract, a machine-learning-based OCR engine, to extract the text from the image file. The images used were coloured images sourced from the internet in different formats, such as JPG, PNG, and WebP, and at different resolutions. A custom adaptive algorithm, built on Gaussian thresholding, was applied to the images to unify their complex backgrounds. It differs from the conventional Gaussian algorithm in that it dynamically generates the block size used to threshold the image, ensuring that, unlike conventional image segmentation, images are processed area-wise (in pixels) as specified by the algorithm at each instance. The system was implemented in Python 3.6. Experiments on fifty different images with complex backgrounds showed that the system extracted English character-based text with 69.7% word-level accuracy and 81.9% character-level accuracy, outperforming existing methods in character-level accuracy.
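A hedged sketch of the pipeline described in the records above, using OpenCV's adaptive Gaussian thresholding ahead of Tesseract. The block-size heuristic below (an odd fraction of the image width) is an illustrative stand-in for the authors' dynamic rule, which the abstract does not spell out.

```python
# Sketch: adaptive Gaussian thresholding to flatten a complex background,
# then Tesseract OCR. File name and parameters are illustrative only.
import cv2
import pytesseract

image = cv2.imread("complex_background.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Derive an odd block size from the image dimensions instead of a fixed value
# (assumed stand-in for the paper's dynamic block-size rule).
block_size = max(3, (gray.shape[1] // 20) | 1)

binary = cv2.adaptiveThreshold(
    gray, 255,
    cv2.ADAPTIVE_THRESH_GAUSSIAN_C,   # Gaussian-weighted local threshold
    cv2.THRESH_BINARY,
    block_size,
    10,                               # constant subtracted from the local mean
)

text = pytesseract.image_to_string(binary)
print(text)
```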
- Item Analysis of Employees' Engagement and Retention of Selected Banking Industry in Lagos State, Nigeria (Uniosun Journal of Employment Relations and Management, 2019-01) Olaniyan, Oluwabunmi Omobolanle. Employee retention has become a major issue confronting the Nigerian banking industry, and effective employee engagement has been found to aid the retention of valued employees. The banking sector in Nigeria has faced problems of poor employee engagement, which pose employee retention challenges. This paper analysed employees' engagement and retention in selected banks in Lagos State, Nigeria. A descriptive survey research design was adopted; the target population comprised 4,084 staff of the head offices of the selected banks in Lagos State, and a sample size of 678 was used. A structured questionnaire was adapted, validated, and used to collect data for the study. A simple random sampling technique was used to select the sample, and the response rate was 92.5%. Data were analysed using descriptive and inferential statistical techniques. Findings revealed that employee engagement had a positive and significant effect on employee retention in the selected banks in Lagos State (β = 0.287; F(1, 712) = 109.815; R² = 0.134; p < 0.05). It was concluded that employee engagement had a positive and significant effect on employee retention in selected deposit money banks in Lagos State, Nigeria. It is therefore recommended that organizations implement proper employee engagement.
- Item Analytic Hierarchy Process Model for Evaluation of Mobile Health Applications (2019-07) Olaniyan, Oluwabunmi Omobolanle. Assessing the usability of mHealth apps remains a herculean task for software usability researchers and engineers, as evaluating the usability attributes of these apps requires substantial effort from a wide range of knowledge domains and prospective users. Most usability models possess numerous attributes that can be used to assess these apps, but current usability techniques cannot effectively rank numerous qualitative and quantitative usability attributes simultaneously. Hence, the main objective of this work is to rank and prioritize the usability attributes embedded in a model for mobile app evaluation. The model was designed hierarchically based on the People at the Center of Mobile Application Development (PACMAD) model and the Integrated Measurement Model (IMM). The attributes considered were efficiency, effectiveness, satisfaction, learnability, operability, user interface aesthetics, and universality. They were ranked by their respective priority weights using the Analytic Hierarchy Process (AHP). A pairwise comparison matrix was formulated from decision makers' judgements, which were aggregated and normalized, and the consistency of the judgements was checked using Saaty's eigenvalue and eigenvector approach, chosen for its simplicity and accuracy. Results showed that efficiency and effectiveness had the highest priorities, at 30% and 27%, while satisfaction and user interface aesthetics ranked lowest, at 6% and 5% respectively. The overall AHP group consensus was 68%. In conclusion, the mathematical technique used is a powerful yet simple tool that can evaluate quantitative and qualitative usability attributes simultaneously. The work presented the assessment of a unified framework that combined judgements from multiple levels of the mHealth app usability evaluation process. Further studies should extend the usability model with more usability attributes, repeat the evaluation with other Multi-Criteria Decision Making (MCDM) approaches, and compare the results to determine the differences or relationships between MCDM techniques on usability models.
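For concreteness, here is a small NumPy sketch of the AHP machinery named above: priority weights from the principal eigenvector of a pairwise comparison matrix, and Saaty's consistency ratio. The 3x3 matrix is invented for the demo and is not the paper's judgement data.

```python
# Illustrative AHP computation via Saaty's eigenvector method.
import numpy as np

# Made-up pairwise comparison matrix for three criteria (reciprocal by construction).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                    # principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                       # normalized priority weights

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)           # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}[n]  # Saaty's random index
print("weights:", weights, "CR:", ci / ri)     # CR < 0.1 => acceptable consistency
```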
- Item An Android-Based Blood Bank Information Retrieval System (Dovepress, 2019-10) Kayode, Aderonke Anthonia. Background: Blood bank record keeping has been carried out manually over the past decades using paper-based file management, which is slow for information retrieval and processing and prone to errors in emergency situations. Materials and methods: This research work addresses that problem through the development of both a web-based and an Android-based blood bank information retrieval system. The web application is used by blood bank administrators to update their available blood inventory, while the mobile application, which houses the mobile search engine, is used to search for blood supplies from the registered blood banks. Results and conclusion: The system also allows registered blood banks to send notifications to registered blood donors on the application, requesting blood donation.
- Item A Performance Study of Selected Machine Learning Techniques for Predicting Heart Diseases (Springer, 2025-04) Olorunfemi, Blessing O. Heart disease remains a leading cause of mortality worldwide and is rising at an alarming rate, making early prediction crucial for effective prevention and timely intervention. Diagnosing heart disease is a difficult process that requires technical skill and accuracy. With improvements in technology, computing has helped simplify the diagnosis of various health problems. Machine learning uses historical data to predict future outcomes, and various machine learning techniques have been developed over the years and applied to heart disease prediction with varying levels of performance. Identifying the best-suited technique for prediction can be challenging. This work analyses the performance of seven machine learning techniques: the AdaBoost algorithm, KNN, Logistic Regression, Naïve Bayes classifier, Random Forest, SVM, and XGBoost. The heart disease dataset was downloaded from the UCI repository and analysed using Python in the Jupyter Notebook environment. A comparative analysis of the seven techniques was performed based on Accuracy, Precision, and Recall. From the results obtained, KNN, Random Forest, and XGBoost showed superior performance with an accuracy of 100%, followed by the AdaBoost algorithm at 92.2%, SVM at 91.71%, and the Naïve Bayes classifier at 88.29%, while Logistic Regression had the lowest accuracy, 86.34%. Overall, KNN, RF, and XGBoost outperformed AdaBoost, SVM, and LR.
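A minimal sketch of the comparison protocol this record describes, with the seven classifiers evaluated on accuracy, precision, and recall. The CSV file name, target column, and split ratio are assumptions; the study's exact preprocessing is not reproduced.

```python
# Sketch: one train/test split, seven classifiers, three metrics per model.
import pandas as pd
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from xgboost import XGBClassifier

df = pd.read_csv("heart.csv")                  # assumed local copy of the UCI data
X, y = df.drop(columns="target"), df["target"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

models = {
    "AdaBoost": AdaBoostClassifier(),
    "KNN": KNeighborsClassifier(),
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "NaiveBayes": GaussianNB(),
    "RandomForest": RandomForestClassifier(),
    "SVM": SVC(),
    "XGBoost": XGBClassifier(eval_metric="logloss"),
}
for name, model in models.items():
    preds = model.fit(X_tr, y_tr).predict(X_te)
    print(f"{name}: acc={accuracy_score(y_te, preds):.3f} "
          f"prec={precision_score(y_te, preds):.3f} "
          f"rec={recall_score(y_te, preds):.3f}")
```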
- Item Application of Data Mining Algorithms for Feature Selection and Prediction of Diabetic Retinopathy (Springer Nature Switzerland, 2019-06) Kayode, Aderonke Anthonia. Diabetic retinopathy is a disease that results from prolonged diabetes mellitus and is the most common cause of loss of vision in humans. Data mining algorithms are used in medical and computing fields to find effective ways of forecasting particular diseases. This research aimed to determine the effect of feature selection on predicting diabetic retinopathy. The dataset used was the Diabetic Retinopathy Debrecen dataset from the University of California repository, in a form suitable for mining. Feature selection was applied to the data, and k-Nearest Neighbour, C4.5 decision tree, Multi-layer Perceptron (MLP), and Support Vector Machine classifiers were then run on the data with and without feature selection. The algorithms were assessed in terms of accuracy and sensitivity. The results show that feature selection increases both the accuracy and the sensitivity of the algorithms considered, most markedly for the Support Vector Machine, although it also increases the time taken to predict diabetic retinopathy.
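The with/without feature-selection comparison might be sketched as follows, using chi-squared SelectKBest ahead of an SVM. The stand-in dataset, the value of k, and the scoring function are illustrative assumptions rather than the study's setup.

```python
# Sketch: effect of feature selection on classifier accuracy.
from sklearn.datasets import load_breast_cancer   # stand-in for the DR data
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

# Baseline: SVM on all features.
baseline = cross_val_score(SVC(), X, y, cv=5).mean()

# With selection: keep the 10 features most associated with the label (chi2).
selected = cross_val_score(
    make_pipeline(SelectKBest(chi2, k=10), SVC()), X, y, cv=5
).mean()

print(f"without selection: {baseline:.3f}, with selection: {selected:.3f}")
```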
- Item Application of Data Mining and Knowledge Management for Business Improvement: An Exploratory Study (Foundation of Computer Science FCS, New York, USA, 2015-02) Kayode, Aderonke Anthonia. In recent years, organizations have employed many approaches to satisfy their customers and gain competitive advantage, and the continuous development of information system applications is changing the ways in which business is conducted. From scanning barcodes at the point of sale (POS) to shopping on the web, businesses generate large volumes of data about products and consumers, stored in different data repositories, while much useful knowledge about products, sales, and customers that could assist business decisions lies locked away in these databases, unexploited. Organizations' survival in this dynamic business environment depends on how proactively they turn these data into useful knowledge that can aid value creation. Customer relationship management and marketing are currently the domains with the greatest potential to utilize data mining techniques for decision support. This paper examines how businesses can improve their performance by utilizing knowledge management (KM) and data mining (DM) applications to manage and support their strategies. Lastly, the synergies and challenges of implementing KM and DM as business tools are critically analysed.
- Item Assessment of Selected Data Mining Classification Algorithms for Analysis and Prediction of Certain Diseases (University of Ibadan Journal of Science and Logics in ICT Research (UIJSLICTR), 2020-03) Oguntunde, Bosede. Medical science generates large volumes of data, stored in medical repositories, that could be mined for vital hidden information essential for disease diagnosis and prognosis. In recent times, the application of data mining to knowledge discovery has shown impressive results in disease analysis and prediction. This study investigates the performance of three data mining classification algorithms, namely decision tree, Naïve Bayes, and k-nearest neighbour, in predicting the likelihood of occurrence of chronic kidney disease, breast cancer, diabetes, and hypothyroid. The datasets, obtained from the UCI Machine Learning Repository, were split into 60% training and 40% testing on the one hand, and 70% training and 30% testing on the other. The performance parameters considered include classification accuracy, error rate, execution time, confusion matrix, and area under the curve. The Waikato Environment for Knowledge Analysis (WEKA) was used to implement the algorithms. The analysis showed that the decision tree recorded the highest prediction accuracy, followed by Naïve Bayes and k-NN, while k-NN recorded the minimum execution time on the four datasets. However, k-NN also recorded the largest average percentage error on the datasets. The findings therefore suggest that the performance of these classification algorithms can be influenced by the type and size of the datasets.
- Item Assessment of Selected Data Mining Classification Algorithms for Analysis and Prediction of Certain Diseases (2020) Odim, Mba. Medical science generates large volumes of data, stored in medical repositories, that could be mined for vital hidden information essential for disease diagnosis and prognosis. In recent times, the application of data mining to knowledge discovery has shown impressive results in disease analysis and prediction. This study investigates the performance of three data mining classification algorithms, namely decision tree, Naïve Bayes, and k-nearest neighbour, in predicting the likelihood of occurrence of chronic kidney disease, breast cancer, diabetes, and hypothyroid. The datasets, obtained from the UCI Machine Learning Repository, were split into 60% training and 40% testing on the one hand, and 70% training and 30% testing on the other. The performance parameters considered include classification accuracy, error rate, execution time, confusion matrix, and area under the curve. The Waikato Environment for Knowledge Analysis (WEKA) was used to implement the algorithms. The analysis showed that the decision tree recorded the highest prediction accuracy, followed by Naïve Bayes and k-NN, while k-NN recorded the minimum execution time on the four datasets. However, k-NN also recorded the largest average percentage error on the datasets. The findings therefore suggest that the performance of these classification algorithms can be influenced by the type and size of the datasets.
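The two records above used WEKA; an equivalent scikit-learn rendering of the 60/40 versus 70/30 evaluation protocol, on a stand-in dataset, might look like this.

```python
# Sketch of the two split ratios described above, applied to one classifier.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)     # stand-in for the UCI datasets

for test_size in (0.4, 0.3):                   # 60/40 and 70/30 splits
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=test_size, random_state=1)
    acc = DecisionTreeClassifier().fit(X_tr, y_tr).score(X_te, y_te)
    print(f"train/test = {1-test_size:.0%}/{test_size:.0%}: accuracy {acc:.3f}")
```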
- Item Atomic Commit in Distributed Database Systems: The Approaches of Blocking and Non-Blocking Protocols (International Journal of Engineering Research & Technology, 2014-10) Olowookere, Toluwase Ayobami. In distributed database systems, the primary purpose of commit protocols is to maintain the atomicity of distributed transactions. The atomic commitment problem is of prime importance in distributed systems, and it becomes even more pressing when some of the sites participating in a transaction's commitment fail. Several atomic commit protocols have evolved to terminate distributed transactions. This paper presents an overview of a distributed transaction model and a description of the two-phase commit (2PC) protocol, which is blocking, and one-phase commit (1PC) protocols, which are non-blocking. It further examines the assumptions these commit protocols make in their bid to address the atomic commitment problem. By restricting the failures considered to site failures, drawbacks in the assumptions of these protocols are identified, showing clearly that the non-blocking protocol studied addresses the drawbacks of the widely used blocking protocol, 2PC, but is itself no panacea, as it presents its own drawbacks in practice. This work should spur other researchers toward a more rigorous reconsideration of the 1PC non-blocking protocol.
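To make the blocking discussion concrete, here is a toy sketch of 2PC coordinator logic; participant classes, logging, timeouts, and recovery (where the blocking behaviour actually bites) are all omitted or assumed.

```python
# Toy sketch of two-phase commit coordinator logic, for illustration only.
# Participants are modeled as objects exposing prepare()/commit()/abort().
# The blocking problem the paper discusses arises when the coordinator fails
# after participants have voted yes: they must hold locks until it recovers.
def two_phase_commit(participants):
    # Phase 1 (voting): ask every participant to prepare and vote.
    votes = [p.prepare() for p in participants]   # True = "ready to commit"

    # Phase 2 (decision): commit only on unanimous yes, otherwise abort.
    if all(votes):
        for p in participants:
            p.commit()
        return "committed"
    for p in participants:
        p.abort()
    return "aborted"
```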
- Item An Automated Mammogram Classification System using Modified Support Vector Machine (Dovepress, 2019-08-15) Kayode, Aderonke Anthonia. Purpose: Breast cancer remains a serious public health problem that results in the loss of lives among women. However, early detection of its signs increases treatment options and the likelihood of cure. Although mammography is a proven technique for examining symptoms of cancer in mammograms, manual observation by radiologists is demanding and often prone to diagnostic errors; computer-aided diagnosis (CADx) systems could be a viable alternative that facilitates and eases the cancer diagnosis process, hence this study. Methodology: The inputs to the proposed model are raw mammograms downloaded from the Mammographic Image Analysis Society database. Prior to classification, the raw mammograms were preprocessed. A gray-level co-occurrence matrix was then used to extract fifteen textural features from the mammograms at four angular directions, θ = {0°, 45°, 90°, 135°}, and two distances, D = {1, 2}. Afterwards, a two-stage support vector machine classified the mammograms as normal, benign, or malignant. Results: All 37 normal images used as test data were classified as normal (no false positives) and all 41 abnormal images were correctly classified as abnormal (no false negatives), so the sensitivity and specificity of the model in detecting abnormality are both 100%. The system then classified each detected abnormality as either benign or malignant: 21 of 23 benign images were correctly classified as benign, and 17 of 18 malignant images were correctly classified as malignant. From these findings, the sensitivity, specificity, positive predictive value, and negative predictive value of the system are 94.4%, 91.3%, 89.5%, and 95.5%, respectively. Conclusion: This article further affirms the prowess of automated CADx systems as a viable tool to facilitate breast cancer diagnosis by radiologists.
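The GLCM step specified above (four angles, two distances) can be sketched with scikit-image as follows. The random array stands in for a preprocessed mammogram region of interest, and the fifteen-feature set is abbreviated to the properties graycoprops exposes directly.

```python
# Sketch of GLCM texture-feature extraction at the angles/distances above.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

mammogram = np.random.randint(0, 256, (128, 128), dtype=np.uint8)  # stand-in ROI

glcm = graycomatrix(
    mammogram,
    distances=[1, 2],
    angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],   # 0, 45, 90, 135 degrees
    levels=256,
    symmetric=True,
    normed=True,
)

# Each property yields one value per (distance, angle) pair.
features = {
    prop: graycoprops(glcm, prop).ravel()
    for prop in ("contrast", "homogeneity", "energy", "correlation")
}
feature_vector = np.concatenate(list(features.values()))  # input to the SVM stage
```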
- Item Automatic Segmentation of Retinal Blood Vessels of Diabetic Retinopathy Patients using Dempster-Shafer Edge Based Detector (ANSInet, 2019-06-15) Kayode, Aderonke Anthonia. Background and Objective: Diabetic retinopathy (DR) is a microvascular complication of diabetes that results in the alteration or total damage of retinal blood vessels and is responsible for most partial loss of sight and blindness among diabetic patients across the world. Early examination of retinal blood vessels can help detect and diagnose the symptoms of DR, thereby curtailing its effects. Methodology: A Dempster-Shafer edge-based detector was used to segment retinal blood vessels from retinal images sourced from the Digital Retinal Images for Vessel Extraction (DRIVE) database. Prior to segmentation, median filtering, Contrast Limited Adaptive Histogram Equalization (CLAHE), and Mahalanobis distance algorithms were used to preprocess the raw retinal images to achieve accurate blood vessel detection and segmentation. Results: A segmentation accuracy of 0.9765 was recorded when the receiver operating characteristic of the technique was computed, showing that an acceptable degree of blood vessel segmentation was achieved. The segmented blood vessels are publicly available for academic and research purposes. Conclusion: The Dempster-Shafer edge-based detector has been further shown to be an effective algorithm for blood vessel segmentation in healthy as well as DR retinal images.
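A hedged OpenCV sketch of the preprocessing chain named above (median filter plus CLAHE) on a DRIVE-style fundus image. Working on the green channel and the specific parameter values are common choices assumed for illustration, not taken from the paper.

```python
# Sketch: median filter + CLAHE preprocessing of a fundus image before
# vessel segmentation. File name and parameters are illustrative.
import cv2

fundus = cv2.imread("drive_image.tif")
green = fundus[:, :, 1]                     # BGR order: index 1 is the green channel,
                                            # which typically shows strongest vessel contrast

denoised = cv2.medianBlur(green, 5)         # 5x5 median filter suppresses speckle noise

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(denoised)            # contrast-limited local histogram equalization

cv2.imwrite("preprocessed.png", enhanced)
```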
- Item Blockchain Mechanism Approach to Smothering of Denial of Service (DoS) Spikes: A Focus on Internet of Things (IoT) Technologies (International Journal of Research and Scientific Innovation, 2024-09) Gbeminiyi Falowo. Denial of Service (DoS) is a cybercrime that attempts to impede electronic consumers from accessing websites and online services by saturating a server with internet traffic. Attackers use networks of infected computers, tools such as bots, and other machines they can access remotely. A decade ago, businesses and financial institutions lost approximately half a trillion dollars to DoS spikes, and DoS attacks were projected to triple in number by the end of 2023 from about eight million less than five years earlier. This study uses a blockchain-based decentralized authentication technique to guard against DoS attacks on the application layer of Internet of Things (IoT) technologies. The secured mechanism involves initiating the communication process, developing the system, and proposing a smart contract. The developed model was evaluated by comparing the approaches' time complexity. The recommended method was also run on two processors operating at distinct speeds, using the SolarWinds application and an online CPU stress test, with the conclusion that the second is preferred. A smart contract for IoT machine usage is established to provide authorization at the blockchain level.
- Item Bootstrap Method for Measures of Statistical Accuracy (African Journal of Pure and Applied Sciences, 2008) Oguntunde, Bosede. We introduce a bootstrap method for dependent data structures, with emphasis on the construction of efficient inferential procedures for measures of an estimator's statistical accuracy, such as standard error, bias, ratio, coefficient of variation, and root mean square error. The method is illustrated with a real time-series data structure.
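In the spirit of this abstract, a minimal moving-block bootstrap sketch for dependent data: resampling contiguous blocks preserves short-range dependence, and the spread of the resampled statistics estimates the standard error and bias of the estimator. The block length, statistic, and toy series are all illustrative.

```python
# Moving-block bootstrap for a dependent (time-series) sample.
import numpy as np

def moving_block_bootstrap(series, block_len=10, n_boot=1000, stat=np.mean):
    n = len(series)
    starts = np.arange(n - block_len + 1)      # all valid block start positions
    n_blocks = int(np.ceil(n / block_len))     # blocks needed to rebuild a series of length n
    estimates = np.empty(n_boot)
    rng = np.random.default_rng(0)
    for b in range(n_boot):
        chosen = rng.choice(starts, size=n_blocks)                      # sample blocks with replacement
        sample = np.concatenate([series[s:s + block_len] for s in chosen])[:n]
        estimates[b] = stat(sample)
    return estimates

# Toy dependent series (scaled random walk), purely for demonstration.
series = np.cumsum(np.random.default_rng(1).normal(size=500)) * 0.1
est = moving_block_bootstrap(series)
print("bootstrap SE:", est.std(ddof=1), "bias:", est.mean() - np.mean(series))
```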
- Item Building Data-Driven Decision Support System for Pragmatic Leadership (EDUCERE - Journal of Educational Research, 2006) Oguntunde, Bosede. A Decision Support System (DSS) is an interactive software-based system that assists leaders (decision makers) in compiling, analyzing, and manipulating information from raw-data documents, knowledge frameworks, and/or business models to identify and solve problems and make decisions. In general, DSS designs and implementations are classified as data-driven, model-driven, knowledge-driven, document-driven, or communication-driven. Taxonomically, a DSS may be passive, active, or cooperative: a passive DSS aids the process of decision making but cannot produce decisions, suggestions, or solutions; an active DSS can; and a cooperative DSS allows the decision maker to modify, complete, or refine the decision suggestions provided by the system before sending them back to the system for validation. This paper focuses on a cooperative data-driven DSS. A data-driven DSS emphasizes access to and manipulation of time series of internal organizational data, and at times external data, using database queries and On-Line Analytical Processing (OLAP) tools, thus helping managers (leaders) make prompt decisions from the available data and models. The research methodology is the IDEF1X approach, normally referred to as a bottom-up approach to project work. The DSS speeds up data analysis for prompt decision making through the data model of a relational database management system (RDBMS), and the implementation optimizes the use of the mathematical relational algebra model for report generation. It is implementable at any level, supporting practical, realistic, and pragmatic leadership qualities.
- Item Comparative Analysis of Some Programming Languages (Transnational Journal of Science and Technology, 2012) Oguntunde, Bosede. Programming languages are used to control the behavior of computing machines. Many programming languages exist, and new ones are continually being created. Different languages become popular with different programmers because there is always a trade-off between ease of learning and use, efficiency, and power of expression. In this work we examine six programming languages, two each from the scientific, non-scientific, and object-oriented groups. We present an algorithm that performs combinations and permutations to implement the comparison. Two parameters, memory consumption and running time, are tested; the object-oriented programming languages perform better in terms of running time, although the same cannot be said of their memory requirements.
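Only the Python side of such a cross-language comparison can be sketched here: a hedged harness measuring running time with time.perf_counter and peak memory with tracemalloc around a permutation routine. The routine and problem size are illustrative, not the paper's benchmark.

```python
# Measurement harness: running time and peak memory for a permutation routine.
import time
import tracemalloc
from itertools import permutations

def count_permutations(n):
    # Enumerate all n! permutations of 0..n-1 and count them.
    return sum(1 for _ in permutations(range(n)))

tracemalloc.start()
t0 = time.perf_counter()
total = count_permutations(9)               # 9! = 362,880 permutations
elapsed = time.perf_counter() - t0
_, peak = tracemalloc.get_traced_memory()   # peak bytes allocated during the run
tracemalloc.stop()

print(f"{total} permutations in {elapsed:.3f}s, peak memory {peak/1024:.1f} KiB")
```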