Department of Computer Sciences
Browsing Department of Computer Sciences by Title
Now showing 1 - 20 of 111
- Item: An Adaptive Thresholding Algorithm-Based Optical Character Recognition System for Information Extraction in Complex Images (2021) Odim, Mba. Extracting text from images with complex backgrounds is a major challenge today. Many existing Optical Character Recognition (OCR) systems cannot handle this problem, and, as reported in the literature, even methods that can still encounter major difficulties with sharply varying contours, touching words and skewed words in scanned documents and images with complex backgrounds. There is, therefore, a need for new methods that can easily and efficiently extract text from such images, which is the primary motivation for this work. This study collected image data and investigated the processes involved in image processing and the techniques applied for data segmentation. It applied an adaptive thresholding algorithm to the selected images to properly segment text characters from each image's complex background, and then used Tesseract, a machine learning product, to extract the text from the image file. The images used were coloured images sourced from the internet in different formats (jpg, png, webp) and resolutions. A custom adaptive algorithm, built on Gaussian adaptive thresholding, was applied to the images to unify their complex backgrounds. The algorithm differs from the conventional Gaussian approach in that it dynamically generates the block size used for thresholding, so that, unlike conventional image segmentation, images are processed area-wise (in pixels) as specified by the algorithm at each instance. The system was implemented in the Python 3.6 programming language, and experimentation involved fifty different images with complex backgrounds.
The results showed that the system was able to extract English character-based text from images with complex backgrounds with 69.7% word-level accuracy and 81.9% character-level accuracy. The proposed method proved more efficient, outperforming existing methods in terms of character-level percentage accuracy.
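The block-wise idea described above can be sketched in plain Python. This is an illustrative, mean-based version only: the study itself used a Gaussian-weighted variant with a dynamically generated block size, and passed the binarised image to Tesseract for recognition (not shown here).

```python
# Hypothetical sketch of block-wise adaptive thresholding: each pixel is
# compared against the mean of its local neighbourhood rather than a single
# global threshold, which copes better with uneven, complex backgrounds.

def adaptive_threshold(img, block=3, c=2):
    """Binarise a 2D grayscale image (list of lists, values 0-255).

    A pixel becomes 255 (foreground) if it exceeds the local mean of its
    (2*block+1) x (2*block+1) window minus a constant c, else 0.
    """
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Clamp the window to the image borders.
            ys = range(max(0, y - block), min(h, y + block + 1))
            xs = range(max(0, x - block), min(w, x + block + 1))
            vals = [img[j][i] for j in ys for i in xs]
            local_mean = sum(vals) / len(vals)
            out[y][x] = 255 if img[y][x] > local_mean - c else 0
    return out
```

Because each pixel is judged against its own neighbourhood, a character stroke on a bright region and one on a dark region can both be recovered, which a single global threshold cannot do.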
- Item: An Adaptive Thresholding Algorithm-Based Optical Character Recognition System for Information Extraction in Complex Images (Journal of Computer Science, 2020-06-12) Ogunde, Adewale Opeoluwa. Extracting text from images with complex backgrounds is a major challenge today. Many existing Optical Character Recognition (OCR) systems cannot handle this problem, and, as reported in the literature, even methods that can still encounter major difficulties with sharply varying contours, touching words and skewed words in scanned documents and images with complex backgrounds. There is, therefore, a need for new methods that can easily and efficiently extract text from such images, which is the primary motivation for this work. This study collected image data and investigated the processes involved in image processing and the techniques applied for data segmentation. It applied an adaptive thresholding algorithm to the selected images to properly segment text characters from each image's complex background, and then used Tesseract, a machine learning product, to extract the text from the image file. The images used were coloured images sourced from the internet in different formats (jpg, png, webp) and resolutions. A custom adaptive algorithm, built on Gaussian adaptive thresholding, was applied to the images to unify their complex backgrounds. The algorithm differs from the conventional Gaussian approach in that it dynamically generates the block size used for thresholding, so that, unlike conventional image segmentation, images are processed area-wise (in pixels) as specified by the algorithm at each instance. The system was implemented in the Python 3.6 programming language, and experimentation involved fifty different images with complex backgrounds.
The results showed that the system was able to extract English character-based text from images with complex backgrounds with 69.7% word-level accuracy and 81.9% character-level accuracy. The proposed method proved more efficient, outperforming existing methods in terms of character-level percentage accuracy.
- Item: An Adaptive Thresholding Algorithm-Based Optical Character Recognition System for Information Extraction in Complex Images (Journal of Computer Science, 2020) Oguntunde, Bosede. Extracting text from images with complex backgrounds is a major challenge today. Many existing Optical Character Recognition (OCR) systems cannot handle this problem, and, as reported in the literature, even methods that can still encounter major difficulties with sharply varying contours, touching words and skewed words in scanned documents and images with complex backgrounds. There is, therefore, a need for new methods that can easily and efficiently extract text from such images, which is the primary motivation for this work. This study collected image data and investigated the processes involved in image processing and the techniques applied for data segmentation. It applied an adaptive thresholding algorithm to the selected images to properly segment text characters from each image's complex background, and then used Tesseract, a machine learning product, to extract the text from the image file. The images used were coloured images sourced from the internet in different formats (jpg, png, webp) and resolutions. A custom adaptive algorithm, built on Gaussian adaptive thresholding, was applied to the images to unify their complex backgrounds. The algorithm differs from the conventional Gaussian approach in that it dynamically generates the block size used for thresholding, so that, unlike conventional image segmentation, images are processed area-wise (in pixels) as specified by the algorithm at each instance. The system was implemented in the Python 3.6 programming language, and experimentation involved fifty different images with complex backgrounds.
The results showed that the system was able to extract English character-based text from images with complex backgrounds with 69.7% word-level accuracy and 81.9% character-level accuracy. The proposed method proved more efficient, outperforming existing methods in terms of character-level percentage accuracy.
- Item: Analysis of Employees' Engagement and Retention of Selected Banking Industry in Lagos State, Nigeria (Uniosun Journal of Employment Relations and Management, 2019-01) Olaniyan, Oluwabunmi Omobolanle. Retention of employees has become a major issue confronting the Nigerian banking industry, and effective employee engagement has been found to aid the retention of valued employees. The banking sector in Nigeria faces the problem of poorly managed employee engagement, which poses employee retention challenges. This paper examined employees' engagement and retention in selected banks in Lagos State, Nigeria. A descriptive survey research design was adopted; the target population comprised 4,084 staff of the head offices of the selected banks in Lagos State, and a sample size of 678 was used. A structured questionnaire was adapted, validated and used to collect data for the study. A simple random sampling technique was used to select the sample, and the response rate was 92.5%. Data were analysed using descriptive and inferential statistical techniques. Findings revealed that employee engagement had a positive and significant effect on employee retention in the selected banks (β = 0.287; F(1, 712) = 109.815; R² = 0.134; p < 0.05). It was concluded that employee engagement had a positive and significant effect on employee retention in selected deposit money banks in Lagos State, Nigeria. It is therefore recommended that organizations pursue proper employee engagement.
- Item: Analytic Hierarchy Process Model for Evaluation of Mobile Health Applications (2019-07) Olaniyan, Oluwabunmi Omobolanle. Assessing the usability of mHealth apps remains a herculean task for software usability researchers and engineers, as evaluating the usability attributes of these apps requires substantial effort from a wide range of knowledge domains and prospective users. Most usability models possess numerous attributes that can be used to assess these apps, but current usability techniques cannot effectively rank numerous qualitative and quantitative usability attributes simultaneously. Hence, the main objective of this work is to rank and prioritize the usability attributes embedded in a model for mobile app evaluation. The model was designed hierarchically based on the People at the Center of Mobile Application Development (PACMAD) model and the Integrated Measurement Model (IMM). Attributes considered include efficiency, effectiveness, satisfaction, learnability, operability, user interface aesthetics and universality. They were ranked by their respective priority weights using the Analytic Hierarchy Process (AHP). A pairwise comparison matrix was formulated from decision makers' judgements, which were aggregated and normalized. Consistency of the judgements was checked using Saaty's eigenvalue and eigenvector approach, chosen for its simplicity and accuracy. Results showed that efficiency and effectiveness had the highest priorities, at 30% and 27%, while satisfaction and user interface aesthetics ranked lowest, at 6% and 5% respectively. The overall AHP group consensus was 68%. In conclusion, it was established that the mathematical technique used is a powerful yet simple tool able to evaluate both quantitative and qualitative usability attributes simultaneously.
The work presented the assessment of a unified framework that combined judgements from multiple levels of the mHealth app usability evaluation process. Further studies are recommended to extend the usability model with more attributes, to repeat the evaluation with other Multi-Criteria Decision Making (MCDM) approaches, and to compare the results so as to determine the differences or relationships between MCDM techniques on usability models.
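The AHP weighting step described above can be illustrated with a small sketch. The pairwise judgements below are hypothetical (three attributes, not the paper's seven), and the geometric-mean method is used as a common, simple approximation to Saaty's principal-eigenvector computation.

```python
import math

def ahp_weights(matrix):
    """Return normalised priority weights for a pairwise comparison matrix."""
    n = len(matrix)
    # The geometric mean of each row approximates the principal eigenvector.
    gm = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

# Hypothetical judgements: efficiency is 2x as important as learnability and
# 3x as important as satisfaction (reciprocals fill the lower triangle).
pairwise = [
    [1.0, 2.0, 3.0],   # efficiency
    [1/2, 1.0, 2.0],   # learnability
    [1/3, 1/2, 1.0],   # satisfaction
]
weights = ahp_weights(pairwise)
```

The weights sum to 1, and the attribute judged more important in every comparison receives the largest weight, mirroring how the study's priority percentages were obtained.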
- Item: An Android Based Blood Bank Information Retrieval System (Dovepress, 2019-10) Kayode, Aderonke Anthonia. Background: Blood bank record keeping has been carried out manually over the past decades using paper file management systems, which are slow for information retrieval and processing and prone to errors in emergency situations. Materials and methods: This research work addresses the problem with the development of both a web-based and an Android-based blood bank information retrieval system. The web application is used by blood bank system administrators to update their available blood inventory, while the mobile application, which includes a mobile search engine, is used to search for blood supplies from the registered blood banks. Results and conclusion: The system also allows registered blood banks to send notifications to registered blood donors through the application, requesting blood donation.
- Item: Application of Data Mining Algorithms for Feature Selection and Prediction of Diabetic Retinopathy (Springer Nature Switzerland, 2019-06) Kayode, Aderonke Anthonia. Diabetic retinopathy is a disease which results from a prolonged case of diabetes mellitus, and it is the most common cause of loss of vision in humans. Data mining algorithms are used in medicine and computing to find effective ways of forecasting a particular disease. This research aimed to determine the effect of feature selection on predicting diabetic retinopathy. The dataset used was the diabetic retinopathy Debrecen dataset from the University of California, in a form suitable for mining. Feature selection was executed on the data, and k-Nearest Neighbour, C4.5 decision tree, Multi-Layer Perceptron (MLP) and Support Vector Machine classifiers were then run on the data both with and without feature selection. The algorithms were assessed in terms of accuracy and sensitivity. The results show that applying feature selection increases both the accuracy and the sensitivity of the algorithms considered, most markedly for the support vector machine. However, using feature selection for classification also increased the time taken to predict diabetic retinopathy.
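A feature selection step of the general kind discussed above can be sketched as follows. This is a hypothetical filter-style scorer (absolute Pearson correlation between each feature and the class label), not necessarily the exact method used in the study.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

def select_features(rows, labels, k):
    """rows: list of feature vectors; return indices of the k best features,
    ranked by absolute correlation with the label (a filter method)."""
    n_features = len(rows[0])
    scores = [abs(pearson([r[i] for r in rows], labels)) for i in range(n_features)]
    return sorted(range(n_features), key=lambda i: scores[i], reverse=True)[:k]
```

Classifiers such as k-NN or an SVM would then be trained only on the selected columns, which is the "with feature selection" condition compared in the abstract.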
- Item: Application of Data Mining and Knowledge Management for Business Improvement: An Exploratory Study (Foundation of Computer Science FCS, New York, USA, 2015-02) Kayode, Aderonke Anthonia. In recent years, organizations have employed many approaches to satisfy their customers and gain competitive advantage. Continuous development of information system applications is also changing the ways in which business is conducted. From scanning barcodes at the point of sale (POS) to shopping on the web, businesses generate large volumes of data about products and consumers, which are stored in different data repositories, while much useful knowledge about products, sales and customers that could assist business decisions remains locked away in these databases, unexploited. The ability of organizations to survive in this dynamic business environment depends on how proactively they turn these data into useful knowledge that can aid value creation. Presently, customer relationship management and marketing are the domains with the greatest potential to utilize data mining techniques for decision support. This paper examines how businesses can improve their performance by using knowledge management (KM) and data mining (DM) applications to manage and support their strategies. Finally, the synergies and challenges of implementing KM and DM as business tools are critically analysed.
- Item: Assessment of Selected Data Mining Classification Algorithms for Analysis and Prediction of Certain Diseases (University of Ibadan Journal of Science and Logics in ICT Research (UIJSLICTR), 2020-03) Oguntunde, Bosede. Medical science generates large volumes of data, stored in medical repositories, that could be useful for extracting vital hidden information essential for disease diagnosis and prognosis. In recent times, the application of data mining to knowledge discovery has shown impressive results in disease analysis and prediction. This study investigates the performance of three data mining classification algorithms, namely decision tree, Naïve Bayes and k-nearest neighbour, in predicting the likelihood of occurrence of chronic kidney disease, breast cancer, diabetes and hypothyroid. The datasets, obtained from the UCI Machine Learning Repository, were split 60% for training and 40% for testing on the one hand, and 70% for training and 30% for testing on the other. The performance parameters considered include classification accuracy, error rate, execution time, confusion matrix and area under the curve. The Waikato Environment for Knowledge Analysis (WEKA) was used to implement the algorithms. The findings showed that the decision tree recorded the highest prediction accuracy, followed by Naïve Bayes and k-NN, while k-NN recorded the minimum execution time on the four datasets. However, k-NN also had the largest average percentage error on the datasets. The findings therefore suggest that the performance of these classification algorithms can be influenced by the type and size of the datasets.
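The train/test protocol described above can be sketched with a minimal nearest-neighbour classifier. The synthetic two-cluster data below stands in for the UCI disease datasets, which are not reproduced here; the 60/40 split mirrors one of the splits used in the study.

```python
import random

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def knn_predict(train, point):
    """1-nearest-neighbour: label of the closest training sample."""
    return min(train, key=lambda s: euclidean(s[0], point))[1]

random.seed(42)
# Two well-separated synthetic clusters labelled 0 and 1 (toy data only).
data = [((random.gauss(0, 0.5), random.gauss(0, 0.5)), 0) for _ in range(50)] \
     + [((random.gauss(5, 0.5), random.gauss(5, 0.5)), 1) for _ in range(50)]
random.shuffle(data)

split = int(0.6 * len(data))          # 60% train / 40% test split
train, test = data[:split], data[split:]
correct = sum(knn_predict(train, p) == lbl for p, lbl in test)
accuracy = correct / len(test)
```

Repeating the same split-train-score loop for several classifiers and datasets, and also timing each run, yields exactly the accuracy and execution-time comparison the abstract reports.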
- Item: Assessment of Selected Data Mining Classification Algorithms for Analysis and Prediction of Certain Diseases (2020) Odim, Mba. Medical science generates large volumes of data, stored in medical repositories, that could be useful for extracting vital hidden information essential for disease diagnosis and prognosis. In recent times, the application of data mining to knowledge discovery has shown impressive results in disease analysis and prediction. This study investigates the performance of three data mining classification algorithms, namely decision tree, Naïve Bayes and k-nearest neighbour, in predicting the likelihood of occurrence of chronic kidney disease, breast cancer, diabetes and hypothyroid. The datasets, obtained from the UCI Machine Learning Repository, were split 60% for training and 40% for testing on the one hand, and 70% for training and 30% for testing on the other. The performance parameters considered include classification accuracy, error rate, execution time, confusion matrix and area under the curve. The Waikato Environment for Knowledge Analysis (WEKA) was used to implement the algorithms. The findings showed that the decision tree recorded the highest prediction accuracy, followed by Naïve Bayes and k-NN, while k-NN recorded the minimum execution time on the four datasets. However, k-NN also had the largest average percentage error on the datasets. The findings therefore suggest that the performance of these classification algorithms can be influenced by the type and size of the datasets.
- Item: Atomic Commit in Distributed Database Systems: The Approaches of Blocking and Non-Blocking Protocols (International Journal of Engineering Research & Technology, 2014-10) Olowookere, Toluwase Ayobami. In distributed database systems, the primary purpose of commit protocols is to maintain the atomicity of distributed transactions. The atomic commitment issue is of prime importance in distributed systems, and it becomes even more pressing when some of the sites participating in a transaction's commitment fail. Several atomic commit protocols have evolved to terminate distributed transactions. This paper presents an overview of a distributed transaction model and a description of the two-phase commit (2PC) protocol, which is blocking, and one-phase commit (1PC) protocols, which are non-blocking. The paper further examines the assumptions these protocols make in addressing atomic commitment in distributed database systems. By restricting the failures considered to site failures, drawbacks in the assumptions of these protocols were identified, showing that the non-blocking protocol studied addresses the drawbacks of the widely used blocking protocol, 2PC, but is itself no panacea, as it too has drawbacks in practice. This work should spur other researchers to a more rigorous reconsideration of the 1PC non-blocking protocol.
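The voting logic of 2PC described above can be sketched in a few lines. This deliberately omits timeouts, logging and failure recovery, which is precisely where 2PC's blocking behaviour arises: a participant that has voted yes must wait for the coordinator's decision.

```python
def two_phase_commit(participants):
    """participants: callables returning True (vote commit) or False (vote abort).

    Phase 1 (prepare): the coordinator collects a vote from every site.
    Phase 2 (decision): commit only if ALL sites voted yes; otherwise abort.
    """
    votes = [vote() for vote in participants]   # phase 1: prepare/vote
    decision = all(votes)                       # phase 2: global decision
    return "COMMIT" if decision else "ABORT"
```

If the coordinator crashes between the two phases, sites that voted yes cannot unilaterally commit or abort and must hold their locks until it recovers, which is the drawback that non-blocking 1PC-style protocols aim to remove.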
- Item: An Automated Mammogram Classification System using Modified Support Vector Machine (Dovepress, 2019-08-15) Kayode, Aderonke Anthonia. Purpose: Breast cancer remains a serious public health problem that results in the loss of lives among women. However, early detection of its signs increases treatment options and the likelihood of cure. Although mammography is an established technique for examining symptoms of cancer in mammograms, manual observation by radiologists is demanding and often prone to diagnostic errors. Computer-aided diagnosis (CADx) systems could therefore be a viable alternative that facilitates and eases the cancer diagnosis process; hence this study. Methodology: The inputs to the proposed model are raw mammograms downloaded from the Mammographic Image Analysis Society database. Prior to classification, the raw mammograms were preprocessed. A gray level co-occurrence matrix was then used to extract fifteen textural features from the mammograms at four angular directions, θ = {0°, 45°, 90°, 135°}, and two distances, D = {1, 2}. Afterwards, a two-stage support vector machine classified the mammograms as normal, benign or malignant. Results: All 37 normal images used as test data were classified as normal (no false positives) and all 41 abnormal images were correctly classified as abnormal (no false negatives), so the sensitivity and specificity of the model in detecting abnormality were both 100%. After detecting an abnormality, the system further classified it as either benign or malignant. Out of 23 benign images, 21 were correctly classified as benign; out of 18 malignant images, 17 were correctly classified as malignant. From these findings, the sensitivity, specificity, positive predictive value and negative predictive value of the system are 94.4%, 91.3%, 89.5% and 95.5%, respectively.
Conclusion: This article further affirms the value of automated CADx systems as a viable tool that can facilitate breast cancer diagnosis by radiologists.
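The gray level co-occurrence matrix (GLCM) feature extraction step described above can be sketched for the simplest case, distance 1 and angle 0° (each pixel paired with its right-hand neighbour). The study computed such matrices at four angles and two distances; `contrast` below is one example of the kind of textural feature derivable from a GLCM.

```python
def glcm(img, levels):
    """Count horizontal (distance 1, angle 0) co-occurrences of gray levels
    in a 2D image whose values lie in range(levels)."""
    m = [[0] * levels for _ in range(levels)]
    for row in img:
        for a, b in zip(row, row[1:]):   # (pixel, right-neighbour) pairs
            m[a][b] += 1
    return m

def contrast(m):
    """GLCM contrast: large when co-occurring levels differ strongly."""
    n = len(m)
    return sum(m[i][j] * (i - j) ** 2 for i in range(n) for j in range(n))
```

Fifteen such statistics per matrix, over all angle/distance combinations, form the feature vector fed to the two-stage SVM.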
- Item: Automatic Segmentation of Retinal Blood Vessels of Diabetic Retinopathy Patients using Dempster-Shafer Edge Based Detector (ANSInet, 2019-06-15) Kayode, Aderonke Anthonia. Background and Objective: Diabetic retinopathy (DR) is a micro-vascular complication of diabetes which results in the alteration or total damage of retinal blood vessels. It is responsible for most partial loss of sight and blindness among diabetic patients across the nations of the world. Early examination of retinal blood vessels can help in detecting and diagnosing the symptoms of DR, thereby curtailing its effects. Methodology: A Dempster-Shafer edge-based detector was used to segment retinal blood vessels in retinal images sourced from the Digital Retinal Images for Vessel Extraction (DRIVE) database. Prior to segmentation, a median filter, Contrast Limited Adaptive Histogram Equalization (CLAHE) and Mahalanobis distance algorithms were used to preprocess the raw retinal images so that accurate blood vessel detection and segmentation could be achieved. Results: A segmentation accuracy of 0.9765 was recorded when the receiver operating characteristics of the technique were computed, showing that an acceptable degree of blood vessel segmentation was achieved. Furthermore, the segmented blood vessels are publicly available for academic and research purposes. Conclusion: The Dempster-Shafer edge-based detector has been further shown to be an effective algorithm for blood vessel segmentation in healthy as well as DR retinal images.
- Item: Bootstrap Method for Measures of Statistical Accuracy (African Journal of Pure and Applied Sciences, 2008) Oguntunde, Bosede. We introduce the bootstrap method for dependent data structures, with emphasis on the construction of efficient inferential procedures for an estimator as measures of its statistical accuracy, such as standard error, bias, ratio, coefficient of variation and root mean square error. The method is illustrated with a real time-series data structure.
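The basic bootstrap idea behind the measures listed above can be sketched as follows: resample the data with replacement many times, recompute the statistic on each resample, and summarise the spread of the replicates. This is the i.i.d. version; a block bootstrap for dependent/time-series data, as the abstract concerns, would resample contiguous blocks instead, which is not shown here.

```python
import random
import statistics

def bootstrap_se(data, stat, reps=1000, rng=None):
    """Bootstrap estimate of the standard error of stat(data)."""
    rng = rng or random.Random()
    replicates = [
        stat([rng.choice(data) for _ in data])  # one resample with replacement
        for _ in range(reps)
    ]
    return statistics.stdev(replicates)

rng = random.Random(0)
sample = [2.1, 2.4, 1.9, 2.8, 2.2, 2.6, 2.0, 2.5]   # toy data
se_mean = bootstrap_se(sample, statistics.mean, reps=500, rng=rng)
```

The same resampling loop, with a different `stat`, yields bootstrap estimates of bias, coefficient of variation or root mean square error.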
- Item: Building Data-Driven Decision Support System for Pragmatic Leadership (EDUCERE - Journal of Educational Research, 2006) Oguntunde, Bosede. A Decision Support System (DSS) is an interactive software-based system that assists leaders (decision makers) to compile, analyze and manipulate information from raw-data documents, knowledge frameworks and/or business models in order to identify and solve problems and make decisions. In general, DSS designs and implementations are classified as data-driven, model-driven, knowledge-driven, document-driven and communication-driven. Taxonomically, a DSS may be passive, active or cooperative. A passive DSS aids the process of decision making but cannot produce decisions, suggestions or solutions; an active DSS can. A cooperative DSS allows the decision maker to modify, complete or refine the decision suggestions provided by the system before sending them back to the system for validation. This paper focuses on a cooperative data-driven DSS. A data-driven DSS emphasizes access to and manipulation of time series of internal organizational data, and at times external data, using database queries and On-Line Analytical Processing (OLAP) tools, thus helping managers (leaders) make prompt decisions easily from the available data and models. The methodology for the research is the IDEF1X approach, normally referred to as a bottom-up approach to project work. The DSS is intended to speed up data analysis for prompt decision making through the data model of a relational Database Management System (RDBMS). The implementation optimizes the use of the mathematical relational algebra model for generating various reports, and it is implementable at any level for practical and pragmatic leadership.
- Item: Comparative Analysis of Some Programming Languages (Transnational Journal of Science and Technology, 2012) Oguntunde, Bosede. Programming languages are used to control the behaviour of computing machines. Many programming languages exist and new ones are always being created; they become popular with different programmers because there is always a trade-off between ease of learning and use, efficiency, and power of expression. In this work we examine six programming languages, two from each of the groups of scientific, non-scientific and object-oriented programming languages. We present an algorithm that performs combination and permutation to implement the comparison. Two parameters, memory consumption and running time, were tested; the object-oriented programming languages performed better in terms of running time, although the same could not be said of their memory requirements.
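The benchmark workload described above can be sketched in one language as follows: generate all permutations and combinations of a small set and record the running time, which is the metric used to compare languages in the study (memory profiling is omitted here, and the set sizes are illustrative).

```python
import itertools
import time

def benchmark(fn, *args):
    """Exhaust the iterator returned by fn(*args); return (count, seconds)."""
    start = time.perf_counter()
    count = sum(1 for _ in fn(*args))
    return count, time.perf_counter() - start

# Permutations of 8 items (8! = 40320) vs 4-combinations of 8 items (C(8,4) = 70).
n_perms, t_perms = benchmark(itertools.permutations, range(8))
n_combs, t_combs = benchmark(itertools.combinations, range(8), 4)
```

Re-implementing the same generator loop in each candidate language and timing it gives the cross-language running-time comparison the abstract reports.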
- Item: Comparative Study and Detection of COVID-19 and Related Viral Pneumonia using a Fine-tuned Deep Transfer Learning (2021) Olaniyan, Oluwabunmi Omobolanle
- Item: Comparative Study and Detection of COVID-19 and Related Viral Pneumonia Using Fine-Tuned Deep Transfer Learning (Springer - Intelligent Systems Reference Library, 2021) Olowookere, Toluwase Ayobami. Coronavirus (COVID-19), which emerged in 2019, is a viral pandemic that causes illness and death in humans. Relentless research efforts have been ongoing to improve key performance indicators for detection, isolation and early treatment. The aim of this study is to conduct a comparative study on the detection of COVID-19 and to develop a Deep Transfer Learning Convolutional Neural Network (DTL-CNN) model to classify chest X-ray images in a binary classification task (COVID-19 or Normal) and a three-class classification scenario (COVID-19, Viral Pneumonia or Normal). The dataset was collected from the Kaggle website and contained a total of 600 images, of which 375 were selected for model training, validation and testing (125 COVID-19, 125 Viral Pneumonia and 125 Normal). To help the model generalize well, data augmentation was performed by setting random image rotation to 15 degrees clockwise. Two experiments were performed, in which a fine-tuned VGG-16 CNN and a fine-tuned VGG-19 CNN with Deep Transfer Learning (DTL) were implemented in Jupyter Notebook using the Python programming language. The system was trained on the sample datasets to detect coronavirus in chest X-ray images. The fine-tuned VGG-16 and VGG-19 DTL models were trained for 40 epochs with a batch size of 10, using the Adam optimizer for weight updates and the categorical cross-entropy loss function. A learning rate of 1e-2 was used for the fine-tuned VGG-16 and 1e-1 for the fine-tuned VGG-19, and each was evaluated on 25% of the X-ray images. The validation and training losses were significantly high in the earlier epochs and then decreased noticeably as training progressed through subsequent epochs.
Results showed that the fine-tuned VGG-16 and VGG-19 models produced a classification accuracy of 99.00% for the binary classes, and 97.33% and 89.33% respectively for the multi-class case. Hence, the VGG-16 based DTL model classified COVID-19 better than the VGG-19 based DTL model. Using the best performing fine-tuned VGG-16 DTL model, tests were carried out on 75 unlabeled images that did not take part in model training or validation. The proposed models provided accurate diagnostics for both binary classification (COVID-19 and Normal) and multi-class classification (COVID-19, Viral Pneumonia and Normal), outperforming other existing models in the literature in terms of accuracy.
- Item: Comparative Study and Detection of COVID-19 and Related Viral Pneumonia Using Fine-Tuned Deep Transfer Learning (2021) Odim, Mba. Coronavirus (COVID-19), which emerged in 2019, is a viral pandemic that causes illness and death in humans. Relentless research efforts have been ongoing to improve key performance indicators for detection, isolation and early treatment. The aim of this study is to conduct a comparative study on the detection of COVID-19 and to develop a Deep Transfer Learning Convolutional Neural Network (DTL-CNN) model to classify chest X-ray images in a binary classification task (COVID-19 or Normal) and a three-class classification scenario (COVID-19, Viral Pneumonia or Normal). The dataset was collected from the Kaggle website and contained a total of 600 images, of which 375 were selected for model training, validation and testing (125 COVID-19, 125 Viral Pneumonia and 125 Normal). To help the model generalize well, data augmentation was performed by setting random image rotation to 15 degrees clockwise. Two experiments were performed, in which a fine-tuned VGG-16 CNN and a fine-tuned VGG-19 CNN with Deep Transfer Learning (DTL) were implemented in Jupyter Notebook using the Python programming language. The system was trained on the sample datasets to detect coronavirus in chest X-ray images. The fine-tuned VGG-16 and VGG-19 DTL models were trained for 40 epochs with a batch size of 10, using the Adam optimizer for weight updates and the categorical cross-entropy loss function. A learning rate of 1e-2 was used for the fine-tuned VGG-16 and 1e-1 for the fine-tuned VGG-19, and each was evaluated on 25% of the X-ray images. The validation and training losses were significantly high in the earlier epochs and then decreased noticeably as training progressed through subsequent epochs.
Results showed that the fine-tuned VGG-16 and VGG-19 models produced a classification accuracy of 99.00% for the binary classes, and 97.33% and 89.33% respectively for the multi-class case. Hence, the VGG-16 based DTL model classified COVID-19 better than the VGG-19 based DTL model. Using the best performing fine-tuned VGG-16 DTL model, tests were carried out on 75 unlabeled images that did not take part in model training or validation. The proposed models provided accurate diagnostics for both binary classification (COVID-19 and Normal) and multi-class classification (COVID-19, Viral Pneumonia and Normal), outperforming other existing models in the literature in terms of accuracy.
- Item: A Comparative Study of some Traditional and Modern Cryptographic Techniques (International Journal of Engineering & Management Research, 2017) Oguntunde, Bosede. In the era of the Internet and networked applications, with the attendant prevalence of virus attacks and intrusions of various kinds and intensities, information security has become a major challenge, and there is demand for stronger encryption techniques that are very hard to crack. The role of cryptography in network security has become pivotal. A wide range of cryptographic algorithms is used for securing networks, and there are continuous research efforts to formulate new cryptographic algorithms aimed at evolving more advanced techniques for more secure communication. This work analyses some cryptographic techniques based on their programming approaches and performance, using criteria such as block size, key length, encryption time and security issues. The Blowfish technique was found to offer better performance than AES and RSA in terms of average encryption time. However, DES and Blowfish share the smallest block size of 64 bits, while DES has the shortest key length of 56 bits.