Elif Duymaz, Fatma Duran

Machine learning (ML) is a subset of artificial intelligence (AI) that focuses on developing algorithms capable of reproducing different aspects of human intelligence1. The earliest neural model, proposed by McCulloch and Pitts and inspired by the functioning of neural networks in the brain, rested on a mathematical foundation composed of linear equations2. As nonlinear problems proved to be beyond the capabilities of such logical models and their analytical capacity became increasingly limited, kernel methods and Bayesian graphical models gained popularity as alternatives3-5. Deep learning (DL) techniques have since emerged as a leading approach in ML, owing to architectures that achieve low error rates and substantially improve predictive accuracy6,7. In DL, each layer of the network takes as input the output of the previous layer, obtained by applying a non-linear transformation8. Through an iterative process of computing rates of change (gradients) and adjusting weights, the model is gradually improved until it reaches an acceptable level of accuracy9. This process enables the selection of parameters that reflect data-specific characteristics in layer-to-layer transfers, as well as the separation of the classes required in image processing according to their topological attributes10. Genomic research is one of the fields that benefit from the opportunities provided by this powerful technique11,12.
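
As a minimal, illustrative sketch (not drawn from any cited study; the network sizes, data, and learning rate are arbitrary assumptions), the layer-wise non-linear transformations and the gradient-based improvement step described above can be written in a few lines of NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.normal(size=(4, 3))          # 4 samples, 3 input features
y = rng.normal(size=(4, 1))          # regression targets

W1 = rng.normal(scale=0.5, size=(3, 5))   # layer 1 weights
W2 = rng.normal(scale=0.5, size=(5, 1))   # layer 2 weights
lr = 0.01                                  # learning rate (arbitrary)

def relu(z):
    # non-linear transformation applied between layers
    return np.maximum(0.0, z)

# Forward pass: each layer consumes the previous layer's non-linear output.
h = relu(x @ W1)          # hidden layer
y_hat = h @ W2            # prediction
loss = float(np.mean((y_hat - y) ** 2))

# Backward pass: the "rate of change" (gradient) of the loss w.r.t. weights.
g_yhat = 2 * (y_hat - y) / len(y)
g_W2 = h.T @ g_yhat
g_h = g_yhat @ W2.T
g_W1 = x.T @ (g_h * (h > 0))   # derivative of ReLU is an indicator

# One step of the iterative improvement process.
W1 -= lr * g_W1
W2 -= lr * g_W2
```

Repeating the forward/backward/update cycle drives the loss down; in practice this loop runs for many iterations over large datasets.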

Figure 1. The basic mechanism of deep learning13: (a) the structure of the human neuronal network, (b) the mathematical model of a neural cell, (c) biological synapses, and (d) synapses of artificial neural networks.

Deep Learning Architecture

ML models can be built using either supervised or unsupervised learning systems. The supervised approach is based on the gradual improvement of algorithms through the use of prior knowledge, in the form of labelled examples, to guide the training process14. Supervised learning typically employs classification and regression methodologies and yields predictions tied to known outcomes. In contrast, unsupervised learning takes a different approach, identifying and clustering unknown structure in the data15. DL models, for their part, use a three-stage workflow that can incorporate both supervised and unsupervised learning methods16. In the first stage, the data are processed and analyzed to extract basic information, which is then visualized in a comprehensible way. The second stage involves building a model suited to the data structure and training it, for example with support vector methods. In the final stage, models that pass these steps, together with a performance analysis of their outputs, are finalized with validation and interpretation techniques17-19. These information-processing stages are hierarchical and draw on various architectures with a non-linear, multi-layer working principle. Each architecture includes algorithms that cover a different domain, giving DL techniques a diverse range of applications20. DL architecture is generally categorized into three classes: generative, hybrid, and discriminative21. Generative deep architectures are used to model high-order correlation properties in data that are correlated and amenable to synthesis22. Bayesian statistical distributions play an important role in elevating this architecture toward the discriminative class23. In discriminative architectures, the goal is pattern classification, achieved in stochastic network structures with conditional models and inter-machine parallelization24.
The hybrid architecture, finally, is based on learning the parameters that determine the discriminative criteria in order to optimize the use of generative models25,26. This diversity makes it possible to explore the hidden classifications and details within variation structures.
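
The supervised/unsupervised contrast above can be illustrated with a small, self-contained sketch (the data, the least-squares fit, and the two-cluster k-means loop are illustrative stand-ins for the classification/regression and clustering families, not methods from the cited studies):

```python
import numpy as np

rng = np.random.default_rng(1)

# --- Supervised: labelled pairs (x, y) guide the fit ---
x = rng.uniform(-1, 1, size=(50, 1))
y = 3.0 * x[:, 0] + 0.1 * rng.normal(size=50)   # known targets, true slope 3
X = np.hstack([x, np.ones((50, 1))])            # add an intercept column
w, *_ = np.linalg.lstsq(X, y, rcond=None)       # learns slope and intercept

# --- Unsupervised: no labels; structure is found by clustering ---
pts = np.vstack([rng.normal(0, 0.3, (30, 2)),   # two unlabelled blobs
                 rng.normal(4, 0.3, (30, 2))])
centers = pts[[0, -1]].copy()                   # initialize from the data
for _ in range(10):
    d = np.linalg.norm(pts[:, None] - centers[None], axis=2)
    labels = d.argmin(axis=1)                   # assign each point a cluster
    centers = np.array([pts[labels == k].mean(axis=0) for k in range(2)])
```

The supervised fit recovers a slope close to the known value of 3, while the clustering loop separates the two blobs without ever seeing a label.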

Deep Learning Algorithms

The fundamental unit of deep architecture is the Artificial Neural Network (ANN), which is inspired by the structure of neural networks in the human brain27. The ANN differs, however, in that inputs are passed directly to a neuron and the output is read directly from it. In the biological counterpart, if the signals received by the cell body depolarize the neuron beyond a threshold value (approximately -55 mV, from a resting potential of roughly -70 mV), the axon generates an action potential, which carries a high-voltage message28,29. ANNs, the most basic module of deep architecture, are generally based on the logistic regression model and require at least one hidden layer between input and output; the number of hidden layers is the main factor behind the 'deep' theme of DL30,31. Convolutional Neural Networks (CNNs) are widely used for image analysis and classification, mapping inputs to outputs with a final activation function that yields class scores between 0 and 132. This module first performs feature mapping to analyze the data and provide output. The filtered regions are then evaluated and reduced in the pooling layer according to their matrix values, in terms of size and parameter accuracy. Finally, the layered matrices obtained in the previous steps are converted into a one-dimensional array, since fully connected neural network layers accept input only as one-dimensional arrays33,34. Once the data are flattened in this way, they can be used to train the network and complete the learning process, after which the network performs classification. The steps involved can be modified or extended based on the requirements of the data and the task35. Another crucial module is the Recurrent Neural Network (RNN).
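
Before turning to recurrent models, the convolution, pooling, and flattening steps described above can be sketched in plain NumPy (the 6x6 "image" and the 3x3 edge filter are arbitrary illustrative choices, not from the cited works):

```python
import numpy as np

img = np.arange(36, dtype=float).reshape(6, 6)   # toy 6x6 image
kernel = np.array([[1.0, 0.0, -1.0]] * 3)        # simple vertical-edge filter

# 1) Feature mapping: slide the kernel over the image (valid convolution).
fh = img.shape[0] - kernel.shape[0] + 1
fw = img.shape[1] - kernel.shape[1] + 1
feature_map = np.array([[np.sum(img[i:i + 3, j:j + 3] * kernel)
                         for j in range(fw)] for i in range(fh)])   # 4x4

# 2) Pooling: keep the strongest response in each 2x2 region.
pooled = feature_map.reshape(2, 2, 2, 2).max(axis=(1, 3))           # 2x2

# 3) Flatten: fully connected layers expect a one-dimensional array.
flat = pooled.ravel()                                               # length 4
```

A real CNN learns the kernel weights from data and stacks many such filters; the three-step structure, however, is exactly the one described above.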
Thanks to their temporal memory capabilities, RNNs can retain information from the recent past and relate it to the present, providing prediction and classification models for future steps36,37. An RNN uses its memory to store features derived from the inputs and learns from this information38. However, gaps in context between successive events in the discrete function structure can reduce prediction accuracy39. Hochreiter and Schmidhuber proposed memory cells that retain information about past inputs across time sequences, overcoming this limitation of traditional recurrent networks and leading to Long Short-Term Memory (LSTM) structures, specialized recurrent networks designed for processing sequential data40. This architecture stores information from past inputs in its memory, but differs from the conventional RNN model in offering a greater effective memory capacity. Such models have demonstrated promising results in retrospectively examining changes in an individual's state of health over time41. The Gated Recurrent Unit (GRU) offers a simplified alternative that avoids much of this computational cost through the use of only two gates42. One gate erases unnecessary information from memory, while the other updates the relevant information as the hidden state is passed on. This structure has demonstrated success rates similar to LSTM in natural language processing and speech-signal modeling43-45.
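
A single step of the two-gate GRU described above can be sketched as follows (the dimensions and random weights are illustrative assumptions, not values from the cited work):

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_hid = 3, 4

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# One weight matrix per gate, acting on [input, previous hidden state].
Wz = rng.normal(scale=0.1, size=(n_in + n_hid, n_hid))  # update gate
Wr = rng.normal(scale=0.1, size=(n_in + n_hid, n_hid))  # reset gate
Wh = rng.normal(scale=0.1, size=(n_in + n_hid, n_hid))  # candidate state

def gru_step(x, h):
    xh = np.concatenate([x, h])
    z = sigmoid(xh @ Wz)                      # how much memory to update
    r = sigmoid(xh @ Wr)                      # how much past to keep
    h_cand = np.tanh(np.concatenate([x, r * h]) @ Wh)
    return (1 - z) * h + z * h_cand           # blended new memory

h = np.zeros(n_hid)
for x in rng.normal(size=(5, n_in)):          # a short input sequence
    h = gru_step(x, h)                        # memory carried across steps
```

The reset gate discards stale information and the update gate blends old memory with the new candidate, which is what keeps the GRU cheaper than the LSTM's three-gate cell while behaving similarly in practice.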

Figure 2. The various deep learning algorithms and how they interact with each other46. These include feed-forward neural networks (FFNN), recurrent neural networks (RNN), convolutional neural networks (CNN), graph convolutional networks (GCN), bidirectional recurrent neural networks (BRNN), and long short-term memory (LSTM) structures.

Using Deep Learning in Genomic Applications

DL algorithms have shown potential in addressing the limitations of the traditional prediction and detection systems used in genomic studies, particularly in analyzing the outputs of next-generation sequencing, because they can model the stochastic processes and chaotic structures inherent in such data47-49.

Differences in chromatin conformation can lead to structural differences and irregularities in gene expression50. The difficulty of distinguishing these differences imposes significant limitations on field studies. To overcome this, Dsouza et al. used an LSTM model to identify genomic elements related to conformation, such as nuclear compartments and transcription factors51. The DeepRiPe model, designed to detect binding sites and exhibiting high performance in sequence analysis, is based on GRU (gated recurrent unit) and CNN algorithms; it takes the binding-site type of RNA-binding proteins as input and aims to provide a high-performance prediction model with a multilayer neural network52. The prediction of binding sites and the analysis of protein interactions between regulatory regions have a significant impact on the field of pharmacogenomics53-56. Important steps are also being taken with DL in gene-prioritization studies: determining the topological function of gene interaction networks supplies both an important source of information and high-performance models for discovering the heterogeneous structures of genes and proteins, contributing to recently developed analysis methodologies whose algorithms combine Pearson correlation with Markov chain rules to examine different causal processes57,58. Normalization of the large matrices arising in single-cell RNA sequencing (scRNA-seq) analyses has likewise become possible, providing empirical values for identifying new diseases or cell types in immunology studies and enabling high-accuracy, efficient analysis of this growing body of knowledge60,61.
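
Sequence models of the kind discussed above (e.g., CNN- or GRU-based binding-site predictors) typically begin by one-hot encoding the genomic sequence; a sketch of that preprocessing step follows, with an illustrative position-weight-matrix scan (the example sequence and motif are arbitrary assumptions, not data from the cited studies):

```python
import numpy as np

BASES = "ACGT"

def one_hot(seq):
    # Each base becomes a 4-dimensional indicator vector.
    idx = np.array([BASES.index(b) for b in seq])
    out = np.zeros((len(seq), 4))
    out[np.arange(len(seq)), idx] = 1.0
    return out

seq = "ACGTGACGT"
x = one_hot(seq)                  # shape (9, 4), one row per base

# A position weight matrix acts like a 1-D convolution filter: sliding it
# along the encoding scores each window for a motif (here, "GAC").
pwm = one_hot("GAC")
scores = np.array([np.sum(x[i:i + 3] * pwm) for i in range(len(seq) - 2)])
best = int(scores.argmax())       # window where the motif matches exactly
```

In a full model, many such filters are learned rather than fixed, and their responses feed the convolutional or recurrent layers described earlier.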
A recent study using next-generation sequencing reported that the DeepVariant tool outperforms existing algorithms in detecting insertions and deletions in whole-exome sequencing analyses62. Genome sequencing experiments also provide opportunities to evaluate background signals, identify patterns, and statistically track peak levels63,64. Identifying the control mechanisms of gene expression is critical in both basic and disease biology; in a recent study, the Decode DL model was used to detect post-transcriptional RNA-binding factors, contributing to the identification of these mechanisms65. The DeepChrome model used modification signals to compute combinatorial changes in histone modifications, crucial factors in gene regulation; it exhibited strong predictive capability and offered a heuristic, optimization-based mapping approach to epigenetic mechanisms66. DL models have likewise been developed to predict modification sites for epigenomic mechanisms such as DNA methylation, histone modification, and non-coding RNA67-69. The lack of response in gene expression systems and the inadequacy of traditional machine learning models have hindered the classification of cancer and its subtypes; however, DL models have recently been used in genomic studies to classify clinical outcomes in various cancer types, improve progression-free survival models, and identify potentially synergistic drug combinations70-72. The NeuSomatic algorithm, a CNN-based statistical model for tumor sequencing data, plays an important role in the highly accurate detection of somatic mutations, which remains a critical requirement in cancer studies. It was the first method to use CNNs for somatic mutation detection and has achieved high accuracy rates, making it a promising tool in cancer genomics research73.
Similarly, detailed analyses of different variant calls and prediction methods are being diversified using RNN, CNN, and LSTM algorithms73-76.

Figure 3. Deep learning study areas are shown77. Within deep learning, the processing of biological data is accomplished through various matching and classification algorithms. The general application topics encompass sequencing technologies, gene expression models, biological imaging techniques, neuro-imaging tools, and body/brain-machine interfaces.

In the field of neuroscience, DL models are used to optimize traditional methods for describing the spatial characteristics of the coordination between evoked brain activity and information processing78-80. A recent study employed deep neural networks (DNNs) to predict significant changes in the spatial characteristics of neurons in the visual cortex by regressing those characteristics on multi-voxel activity81. Investigating the genetic factors underlying neuropsychiatric disorders has the potential to reveal novel therapeutic targets for these conditions82-84. The DeepGWAS model, which utilizes deep learning, offers an opportunity to analyze the functional connectivity between various brain regions extensively and has shown improved diagnostic accuracy for major depressive disorder85. For neurodegenerative diseases, traditional diagnostic approaches such as patient history and empirical tests may not always suffice. In a recent study, Lanjewar et al. developed a model from 6,400 magnetic resonance imaging (MRI) scans at various stages of dementia, combining convolutional neural network (CNN) and k-nearest neighbors (KNN) algorithms; the resulting model diagnosed Alzheimer's disease with an accuracy of 99.58%86. DL also plays an important role in medical imaging more broadly. Various medical imaging models have been created to address the constraints of high-dimensional, finely classified datasets, and DL algorithms have thus made significant contributions to comprehensive, wide-view analyses64,87-89. DL has enabled better identification of the features of processed datasets, allowing detailed, data-specific information to be extracted.
Its development on the foundation of the backpropagation algorithm has yielded high performance in exploring complex structure in large datasets. In biological studies especially, DL has removed various classification and regression constraints, increasing success rates in diagnosing and treating disease by generating accurate prediction models. In this regard, experience with current-generation technologies will guide significant future advances.

References:

1.     Mohtasham Moein, M., Saradar, A., Rahmati, K., Ghasemzadeh Mousavinejad, S. H., Bristow, J., Aramali, V., & Karakouzian, M. (2023). Predictive models for concrete properties using machine learning and deep learning approaches: A review. Journal of Building Engineering, 63, 105444. https://doi.org/10.1016/J.JOBE.2022.105444

2.     McCulloch, W. S., & Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. The Bulletin of Mathematical Biophysics, 5(4), 115–133. https://doi.org/10.1007/BF02478259

3.     Reinoso-Peláez, E. L., Gianola, D., & González-Recio, O. (2022). Genome-Enabled Prediction Methods Based on Machine Learning. Methods in Molecular Biology, 2467, 189–218. https://doi.org/10.1007/978-1-0716-2205-6_7/COVER

4.     Fortuin, V. (2022). Priors in Bayesian Deep Learning: A Review. International Statistical Review, 90(3), 563–591. https://doi.org/10.1111/INSR.12502

5.     Ahmadi, N., Adiono, T., Purwarianti, A., Constandinou, T. G., & Bouganis, C. S. (2022). Improved Spike-Based Brain-Machine Interface Using Bayesian Adaptive Kernel Smoother and Deep Learning. IEEE Access, 10, 29341–29356. https://doi.org/10.1109/ACCESS.2022.3159225

6.     Hinton, G. E., Osindero, S., & Teh, Y. W. (2006). A Fast Learning Algorithm for Deep Belief Nets. Neural Computation, 18(7), 1527–1554. https://doi.org/10.1162/NECO.2006.18.7.1527

7.     Hinton, G. E., & Salakhutdinov, R. R. (2006). Reducing the dimensionality of data with neural networks. Science, 313(5786), 504–507. https://doi.org/10.1126/SCIENCE.1127647/SUPPL_FILE/HINTON.SOM.PDF

8.     Kozma, R., Ilin, R., & Siegelmann, H. T. (2018). Evolution of Abstraction Across Layers in Deep Learning Neural Networks. Procedia Computer Science, 144, 203–213. https://doi.org/10.1016/J.PROCS.2018.10.520

9.     Aggarwal, H. K., Mani, M. P., & Jacob, M. (2019). MoDL: Model-Based Deep Learning Architecture for Inverse Problems. IEEE Transactions on Medical Imaging, 38(2), 394–405. https://doi.org/10.1109/TMI.2018.2865356

10. Nguyen, T. Q., Weitekamp, D., Anderson, D., Castello, R., Cerri, O., Pierini, M., Spiropulu, M., & Vlimant, J. R. (2019). Topology Classification with Deep Learning to Improve Real-Time Event Selection at the LHC. Computing and Software for Big Science, 3(1), 1–14. https://doi.org/10.1007/S41781-019-0028-1/TABLES/3

11. Talukder, A., Barham, C., Li, X., & Hu, H. (2021). Interpretation of deep learning in genomics and epigenomics. Briefings in Bioinformatics, 22(3), 1–16. https://doi.org/10.1093/BIB/BBAA177

12. Routhier, E., & Mozziconacci, J. (2022). Genomics enters the deep learning era. PeerJ, 10, e13613. https://doi.org/10.7717/PEERJ.13613/FIG-2

13. Meng, Z., Hu, Y., & Ancey, C. (2020). Using a data driven approach to predict waves generated by gravity driven mass flows. Water (Switzerland), 12(2). https://doi.org/10.3390/W12020600

14. Burrello, J., Burrello, A., Vacchi, E., Bianco, G., Caporali, E., Amongero, M., Airale, L., Bolis, S., Vassalli, G., Cereda, C. W., Mulatero, P., Bussolati, B., Camici, G. G., Melli, G., Monticone, S., & Barile, L. (2022). Supervised and unsupervised learning to define the cardiovascular risk of patients according to an extracellular vesicle molecular signature. Translational Research, 244, 114–125. https://doi.org/10.1016/J.TRSL.2022.02.005

15. Alloghani, M., Al-Jumeily, D., Mustafina, J., Hussain, A., & Aljaaf, A. J. (2020). A Systematic Review on Supervised and Unsupervised Machine Learning Algorithms for Data Science. 3–21. https://doi.org/10.1007/978-3-030-22475-2_1

16. Li, Y., Li, W., Xiong, J., Xia, J., & Xie, Y. (2020). Comparison of Supervised and Unsupervised Deep Learning Methods for Medical Image Synthesis between Computed Tomography and Magnetic Resonance Images. BioMed Research International, 2020. https://doi.org/10.1155/2020/5193707

17. Rahbar, M., Mahdavinejad, M., Markazi, A. H. D., & Bemanian, M. (2022). Architectural layout design through deep learning and agent-based modeling: A hybrid approach. Journal of Building Engineering, 47, 103822. https://doi.org/10.1016/J.JOBE.2021.103822

18. Gilik, A., Ogrenci, A. S., & Ozmen, A. (2022). Air quality prediction using CNN+LSTM-based hybrid deep learning architecture. Environmental Science and Pollution Research, 29(8), 11920–11938. https://doi.org/10.1007/S11356-021-16227-W/METRICS

19. Stahlschmidt, S. R., Ulfenborg, B., & Synnergren, J. (2022). Multimodal deep learning for biomedical data fusion: a review. Briefings in Bioinformatics, 23(2). https://doi.org/10.1093/BIB/BBAB569

20. Shrestha, A., & Mahmood, A. (2019). Review of deep learning algorithms and architectures. IEEE Access, 7, 53040–53065. https://doi.org/10.1109/ACCESS.2019.2912200

21. Deng, L. (2014). A tutorial survey of architectures, algorithms, and applications for deep learning. APSIPA Transactions on Signal and Information Processing, 3, e2. https://doi.org/10.1017/ATSIP.2013.9

22. Newton, D. (2019). Generative Deep Learning in Architectural Design. 3(2), 176–189. https://doi.org/10.1080/24751448.2019.1640536

23. Zheng, H., & Yuan, P. F. (2021). A generative architectural and urban design method through artificial neural networks. Building and Environment, 205, 108178. https://doi.org/10.1016/J.BUILDENV.2021.108178

24. Zhao, B., Feng, J., Wu, X., & Yan, S. (2017). A survey on deep learning-based fine-grained object classification and semantic segmentation. International Journal of Automation and Computing, 14(2), 119–135. https://doi.org/10.1007/S11633-017-1053-3/METRICS 

25. Cumbajin, M., Stoean, R., Aguado, J., & Joya, G. (2022). Hybrid Deep Learning Architecture Approach for Photovoltaic Power Plant Output Prediction. Lecture Notes in Networks and Systems, 379 LNNS, 26–37. https://doi.org/10.1007/978-3-030-94262-5_3/COVER

26. Abuhmed, T., El-Sappagh, S., & Alonso, J. M. (2021). Robust hybrid deep learning models for Alzheimer’s progression detection. Knowledge-Based Systems, 213, 106688. https://doi.org/10.1016/J.KNOSYS.2020.106688

27. Miotto, R., Wang, F., Wang, S., Jiang, X., & Dudley, J. T. (2018). Deep learning for healthcare: review, opportunities and challenges. Briefings in Bioinformatics, 19(6), 1236–1246. https://doi.org/10.1093/BIB/BBX044

28. Schmidhuber, J. (2015). Deep learning in neural networks: An overview. Neural Networks, 61, 85–117. https://doi.org/10.1016/J.NEUNET.2014.09.003

29. Li, J., Zhang, T., Luo, W., Yang, J., Yuan, X. T., & Zhang, J. (2017). Sparseness Analysis in the Pretraining of Deep Neural Networks. IEEE Transactions on Neural Networks and Learning Systems, 28(6), 1425–1438. https://doi.org/10.1109/TNNLS.2016.2541681 

30. Belciug, S. (2020). Logistic regression paradigm for training a single-hidden layer feedforward neural network. Application to gene expression datasets for cancer research. Journal of Biomedical Informatics, 102. https://doi.org/10.1016/J.JBI.2019.103373

31. Bailly, A., Blanc, C., Francis, É., Guillotin, T., Jamal, F., Wakim, B., & Roy, P. (2022). Effects of dataset size and interactions on the prediction performance of logistic regression and deep learning models. Computer Methods and Programs in Biomedicine, 213, 106504. https://doi.org/10.1016/J.CMPB.2021.106504

32. Singh, S. P., Wang, L., Gupta, S., Goli, H., Padmanabhan, P., & Gulyás, B. (2020). 3D Deep Learning on Medical Images: A Review. Sensors, 20(18), 5097. https://doi.org/10.3390/S20185097

33. Alzubaidi, L., Zhang, J., Humaidi, A. J., Al-Dujaili, A., Duan, Y., Al-Shamma, O., Santamaría, J., Fadhel, M. A., Al-Amidie, M., & Farhan, L. (2021). Review of deep learning: concepts, CNN architectures, challenges, applications, future directions. Journal of Big Data, 8(1), 1–74. https://doi.org/10.1186/S40537-021-00444-8

34. Aloysius, N., & Geetha, M. (2018). A review on deep convolutional neural networks. Proceedings of the 2017 IEEE International Conference on Communication and Signal Processing, ICCSP 2017, 2018-January, 588–592. https://doi.org/10.1109/ICCSP.2017.8286426

35. Garcia-Isla, G., Muscato, F. M., Sansonetti, A., Magni, S., Corino, V. D. A., & Mainardi, L. T. (2022). Ensemble classification combining ResNet and handcrafted features with three-steps training. Physiological Measurement, 43(9), 094003. https://doi.org/10.1088/1361-6579/AC8F12

36. Trieu, T., Martinez-Fundichely, A., & Khurana, E. (2020). DeepMILO: A deep learning approach to predict the impact of non-coding sequence variants on 3D chromatin structure. Genome Biology, 21(1), 1–11. https://doi.org/10.1186/S13059-020-01987-4/FIGURES/4

37. Gaafar, A. S., Dahr, J. M., & Hamoud, A. K. (2022). Comparative Analysis of Performance of Deep Learning Classification Approach based on LSTM-RNN for Textual and Image Datasets. Informatica, 46(5), 21–28. https://doi.org/10.31449/INF.V46I5.3872

38. Matsuo, Y., LeCun, Y., Sahani, M., Precup, D., Silver, D., Sugiyama, M., Uchibe, E., & Morimoto, J. (2022). Deep learning, reinforcement learning, and world models. Neural Networks, 152, 267–275. https://doi.org/10.1016/J.NEUNET.2022.03.037

39. Bai, Q., Zhou, J., & He, L. (2022). PG-RNN: using position-gated recurrent neural networks for aspect-based sentiment classification. Journal of Supercomputing, 78(3), 4073–4094. https://doi.org/10.1007/S11227-021-04019-5/METRICS

40. Hochreiter, S., & Schmidhuber, J. (1997). Long Short-Term Memory. Neural Computation, 9(8), 1735–1780. https://doi.org/10.1162/NECO.1997.9.8.1735

41. Hamayel, M. J., & Owda, A. Y. (2021). A Novel Cryptocurrency Price Prediction Model Using GRU, LSTM and bi-LSTM Machine Learning Algorithms. AI, 2(4), 477–496. https://doi.org/10.3390/AI2040030

42. Yu, Y., Si, X., Hu, C., & Zhang, J. (2019). A Review of Recurrent Neural Networks: LSTM Cells and Network Architectures. Neural Computation, 31(7), 1235–1270. https://doi.org/10.1162/NECO_A_01199

43. Ravanelli, M., Brakel, P., Omologo, M., & Bengio, Y. (2018). Light Gated Recurrent Units for Speech Recognition. IEEE Transactions on Emerging Topics in Computational Intelligence, 2(2), 92–102. https://doi.org/10.1109/TETCI.2017.2762739

44. Su, Y., & Kuo, C. C. J. (2019). On extended long short-term memory and dependent bidirectional recurrent neural network. Neurocomputing, 356, 151–161. https://doi.org/10.1016/J.NEUCOM.2019.04.044

45. Wang, J., Wang, P., Tian, H., Tansey, K., Liu, J., & Quan, W. (2023). A deep learning framework combining CNN and GRU for improving wheat yield estimates using time series remotely sensed multi-variables. Computers and Electronics in Agriculture, 206, 107705. https://doi.org/10.1016/J.COMPAG.2023.107705

46. Suh, D., Lee, J. W., Choi, S., & Lee, Y. (2021). Recent Applications of Deep Learning Methods on Evolution- and Contact-Based Protein Structure Prediction. International Journal of Molecular Sciences, 22(11), 6032. https://doi.org/10.3390/IJMS22116032

47. Amanatidis, D., Vaitsi, K., & Dossis, M. (2022). Deep Neural Network Applications for Bioinformatics. 7th South-East Europe Design Automation, Computer Engineering, Computer Networks and Social Media Conference, SEEDA-CECNSM 2022. https://doi.org/10.1109/SEEDA-CECNSM57760.2022.9932895 

48. Sabrina Azmi, N., Samah, A. A., Abdul Majid, H., Ali Shah, Z., Hashim, H., Syaza Azman, N., Bahru, J., & Ezzeddin Kamil Mohamed Hashim, M. (2022). Classifying Sarcoma Cancer Using Deep Neural Networks Based on Multi-Omics Data. International Journal of Innovative Computing, 12(1), 73–80. https://doi.org/10.11113/IJIC.V12N1.360

49. AlQuraishi, M., & Sorger, P. K. (2021). Differentiable biology: using deep learning for biophysics-based and data-driven modeling of molecular mechanisms. Nature Methods, 18(10), 1169–1180. https://doi.org/10.1038/s41592-021-01283-4

50. Eraslan, G., Avsec, Ž., Gagneur, J., & Theis, F. J. (2019). Deep learning: new computational modelling techniques for genomics. Nature Reviews Genetics, 20(7), 389–403. https://doi.org/10.1038/s41576-019-0122-6

51. Dsouza, K. B., Maslova, A., Al-Jibury, E., Merkenschlager, M., Bhargava, V. K., & Libbrecht, M. W. (2022). Learning representations of chromatin contacts using a recurrent neural network identifies genomic drivers of conformation. Nature Communications, 13(1), 1–19. https://doi.org/10.1038/s41467-022-31337-w

52. Ghanbari, M., & Ohler, U. (2020). Deep neural networks for interpreting RNA-binding protein target preferences. Genome Research, 30(2), 214–226. https://doi.org/10.1101/GR.247494.118

53. Lin, E., Lin, C. H., & Lane, H. Y. (2021). Machine Learning and Deep Learning for the Pharmacogenomics of Antidepressant Treatments. Clinical Psychopharmacology and Neuroscience, 19(4), 557. https://doi.org/10.9758/CPN.2021.19.4.577

54. Vaz, J. M., & Balaji, S. (2021). Convolutional neural networks (CNNs): concepts and applications in pharmacogenomics. Molecular Diversity 2021 25:3, 25(3), 1569–1584. https://doi.org/10.1007/S11030-021-10225-3

55. Pan, X., Lin, X., Cao, D., Zeng, X., Yu, P. S., He, L., Nussinov, R., & Cheng, F. (2022). Deep learning for drug repurposing: Methods, databases, and applications. Wiley Interdisciplinary Reviews: Computational Molecular Science, 12(4), e1597. https://doi.org/10.1002/WCMS.1597

56. Akpokiro, V., Martin, T., & Oluwadare, O. (2022). EnsembleSplice: ensemble deep learning model for splice site prediction. BMC Bioinformatics, 23(1), 1–22. https://doi.org/10.1186/S12859-022-04971-W/FIGURES/5

57. Afshar, S., Braun, P. R., Han, S., & Lin, Y. (2023). A multimodal deep learning model to infer cell-type-specific functional gene networks. BMC Bioinformatics, 24(1), 1–12. https://doi.org/10.1186/S12859-023-05146-X

58. Li, Z., Gao, E., Zhou, J., Han, W., Xu, X., & Gao, X. (2023). Applications of deep learning in understanding gene regulation. Cell Reports Methods, 3(1), 100384. https://doi.org/10.1016/J.CRMETH.2022.100384

59. Yuan, Y., & Bar-Joseph, Z. (2019). Deep learning for inferring gene relationships from single-cell expression data. Proceedings of the National Academy of Sciences of the United States of America, 116(52), 27151–27158. https://doi.org/10.1073/PNAS.1911536116/SUPPL_FILE/PNAS.1911536116.SD02.XLSX

60. Schultze, J. L., Büttner, M., & Becker, M. (2022). Swarm immunology: harnessing blockchain technology and artificial intelligence in human immunology. Nature Reviews Immunology, 22(7), 401–403. https://doi.org/10.1038/s41577-022-00740-1

61. Medina, S., Ihrie, R. A., & Irish, J. M. (2022). Learning cell identity in immunology, neuroscience, and cancer. Seminars in Immunopathology, 45(1), 3–16. https://doi.org/10.1007/S00281-022-00976-Y

62. Kumaran, M., Subramanian, U., & Devarajan, B. (2019). Performance assessment of variant calling pipelines using human whole exome sequencing and simulated data. BMC Bioinformatics, 20(1), 1–11. https://doi.org/10.1186/S12859-019-2928-9/FIGURES/8

63. Hentges, L. D., Sergeant, M. J., Cole, C. B., Downes, D. J., Hughes, J. R., & Taylor, S. (2022). LanceOtron: a deep learning peak caller for genome sequencing experiments. Bioinformatics, 38(18), 4255–4263. https://doi.org/10.1093/BIOINFORMATICS/BTAC525

64. Chen, X., Wang, X., Zhang, K., Fung, K. M., Thai, T. C., Moore, K., Mannel, R. S., Liu, H., Zheng, B., & Qiu, Y. (2022). Recent advances and clinical applications of deep learning in medical image analysis. Medical Image Analysis, 79, 102444. https://doi.org/10.1016/J.MEDIA.2022.102444

65. Tasaki, S., Gaiteri, C., Mostafavi, S., & Wang, Y. (2020). Deep learning decodes the principles of differential gene expression. Nature Machine Intelligence, 2(7), 376–386. https://doi.org/10.1038/s42256-020-0201-6

66. Singh, R., Lanchantin, J., Robins, G., & Qi, Y. (2016). DeepChrome: deep-learning for predicting gene expression from histone modifications. Bioinformatics, 32(17), i639–i648. https://doi.org/10.1093/BIOINFORMATICS/BTW427

67. Yin, Q., Wu, M., Liu, Q., Lv, H., & Jiang, R. (2019). DeepHistone: A deep learning approach to predicting histone modifications. BMC Genomics, 20(2), 11–23. https://doi.org/10.1186/S12864-019-5489-4/FIGURES/6 

68. Li, W., Wong, W. H., & Jiang, R. (2019). DeepTACT: predicting 3D chromatin contacts via bootstrapping deep learning. Nucleic Acids Research, 47(10), e60–e60. https://doi.org/10.1093/NAR/GKZ167

69. Boudellioua, I., Kulmanov, M., Schofield, P. N., Gkoutos, G. V., & Hoehndorf, R. (2019). DeepPVP: Phenotype-based prioritization of causative variants using deep learning. BMC Bioinformatics, 20(1), 1–8. https://doi.org/10.1186/S12859-019-2633-8/TABLES/1

70. Kuenzi, B. M., Park, J., Fong, S. H., Sanchez, K. S., Lee, J., Kreisberg, J. F., Ma, J., & Ideker, T. (2020). Predicting Drug Response and Synergy Using a Deep Learning Model of Human Cancer Cells. Cancer Cell, 38(5), 672-684.e6. https://doi.org/10.1016/J.CCELL.2020.09.014

71. Rani, P., Dutta, K., & Kumar, V. (2022). Artificial intelligence techniques for prediction of drug synergy in malignant diseases: Past, present, and future. Computers in Biology and Medicine, 144, 105334. https://doi.org/10.1016/J.COMPBIOMED.2022.105334

72. Liu, H., Zhao, Y., Zhang, L., & Chen, X. (2018). Anti-cancer Drug Response Prediction Using Neighbor-Based Collaborative Filtering with Global Effect Removal. Molecular Therapy – Nucleic Acids, 13, 303–311. https://doi.org/10.1016/J.OMTN.2018.09.011

73. Sahraeian, S. M. E., Liu, R., Lau, B., Podesta, K., Mohiyuddin, M., & Lam, H. Y. K. (2019). Deep convolutional neural networks for accurate somatic mutation detection. Nature Communications, 10(1), 1–10. https://doi.org/10.1038/s41467-019-09027-x

74. Liang, Q., Bible, P. W., Liu, Y., Zou, B., & Wei, L. (2020). DeepMicrobes: taxonomic classification for metagenomics with deep learning. NAR Genomics and Bioinformatics, 2(1). https://doi.org/10.1093/NARGAB/LQAA009

75. Friedman, S., Gauthier, L., Farjoun, Y., & Banks, E. (2020). Lean and deep models for more accurate filtering of SNP and INDEL variant calls. Bioinformatics, 36(7), 2060–2067. https://doi.org/10.1093/BIOINFORMATICS/BTZ901

76. Luo, R., Wong, C. L., Wong, Y. S., Tang, C. I., Liu, C. M., Leung, C. M., & Lam, T. W. (2020). Exploring the limit of using a deep neural network on pileup data for germline variant calling. Nature Machine Intelligence, 2(4), 220–227. https://doi.org/10.1038/s42256-020-0167-4

77. Auslander, N., Gussow, A. B., & Koonin, E. V. (2021). Incorporating Machine Learning into Established Bioinformatics Frameworks. International Journal of Molecular Sciences, 22(6), 2903. https://doi.org/10.3390/IJMS22062903

78. Jagadeesh, A. V., & Gardner, J. L. (2022). Texture-like representation of objects in human visual cortex. Proceedings of the National Academy of Sciences of the United States of America, 119(17), e2115302119. https://doi.org/10.1073/PNAS.2115302119/SUPPL_FILE/PNAS.2115302119.SAPP.PDF

79. Rust, N. C., & Jannuzi, B. G. L. (2022). Identifying Objects and Remembering Images: Insights From Deep Neural Networks. Current Directions in Psychological Science, 31(4), 316–323. https://doi.org/10.1177/09637214221083663

80. Moinian, S., Vegh, V., & Reutens, D. (2023). Towards automated in vivo parcellation of the human cerebral cortex using supervised classification of magnetic resonance fingerprinting residuals. Cerebral Cortex, 33(5), 1550–1565. https://doi.org/10.1093/CERCOR/BHAC155

81. Wang, H., Huang, L., Du, C., Li, D., Wang, B., & He, H. (2021). Neural Encoding for Human Visual Cortex with Deep Neural Networks Learning “What” and “Where.” IEEE Transactions on Cognitive and Developmental Systems, 13(4), 827–840. https://doi.org/10.1109/TCDS.2020.3007761

82. Domínguez-García, C. M., Serrano-Juárez, C. A., Rodríguez-Camacho, M., Moreno-Villagómez, J., Araujo Solís, M. A., & Prieto-Corona, B. (2022). Neuropsychological intervention in attention and visuospatial skills in two patients with Williams syndrome with different types of genetic deletion. Applied Neuropsychology: Child. https://doi.org/10.1080/21622965.2022.2063723

83. Chen, Y., Zhang, F., Zhang, C., Xue, T., Zekelman, L. R., He, J., Song, Y., Makris, N., Rathi, Y., Golby, A. J., Cai, W., & O’Donnell, L. J. (2022). White Matter Tracts are Point Clouds: Neuropsychological Score Prediction and Critical Region Localization via Geometric Deep Learning. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 13431 LNCS, 174–184. https://doi.org/10.1007/978-3-031-16431-6_17/COVER

84. Moetesum, M., Diaz, M., Masroor, U., Siddiqi, I., & Vessio, G. (2022). A survey of visual and procedural handwriting analysis for neuropsychological assessment. Neural Computing and Applications, 34(12), 9561–9578. https://doi.org/10.1007/S00521-022-07185-6/TABLES/4

85. Wen, J., Li, G., Chen, J., Sun, Q., Liu, W., Guan, W., Lai, B., Szatkiewicz, J. P., He, X., Sullivan, P. F., & Li, Y. (2022). DeepGWAS: Enhance GWAS Signals for Neuropsychiatric Disorders via Deep Neural Network. BioRxiv, 2022.12.20.521277. https://doi.org/10.1101/2022.12.20.521277

86. Lanjewar, M. G., Parab, J. S., & Shaikh, A. Y. (2022). Development of framework by combining CNN with KNN to detect Alzheimer’s disease using MRI images. Multimedia Tools and Applications, 82(8), 12699–12717. https://doi.org/10.1007/S11042-022-13935-4/METRICS

87. Ren, Y., Han, J., Chen, C., Xu, Y., & Bao, T. (2022). Computer Vision and Pattern Recognition Technology on Account of Deep Neural Network. 162–169. https://doi.org/10.1007/978-3-031-24367-7_16/COVER

88. Liu, S., Huang, S., Wang, S., Muhammad, K., Bellavista, P., & Del Ser, J. (2023). Visual Tracking in Complex Scenes: A Location Fusion Mechanism Based on the Combination of Multiple Visual Cognition Flows. Information Fusion. https://doi.org/10.1016/J.INFFUS.2023.02.005

89. Sarker, I. H. (2021). Deep Learning: A Comprehensive Overview on Techniques, Taxonomy, Applications and Research Directions. SN Computer Science, 2(6), 1–20. https://doi.org/10.1007/S42979-021-00815-1/FIGURES/6 

Bioinfocodes 2021 All Rights Reserved - Mehmet Çalıseki