Book: Fundamentals and Methods of Machine and Deep Learning - Algorithms, Tools, and Applications
Production Engineering and Mechanical Design Forum
In the name of Allah, the Most Gracious, the Most Merciful

Welcome, dear visitor.
We hope you have a pleasant time with us and look forward to your opinions and contributions.
If you are already a member, please log in.
If this is your first visit to the forum, we would be honored to have you join our family.
A video guide to registering on the forum:
http://www.eng2010.yoo7.com/t5785-topic
A video guide to downloading from the forum:
http://www.eng2010.yoo7.com/t2065-topic
If you have trouble registering or activating your account,
or if you have forgotten your login details,
please contact us at the following e-mail address:

Deabs2010@yahoo.com





 
Admin
Forum Administrator

Number of posts: 18992
Rating: 35482
Join date: 01/07/2009
Country: Egypt
Occupation: Administrator of the Production Engineering and Mechanical Design Forum

Post subject: Book: Fundamentals and Methods of Machine and Deep Learning - Algorithms, Tools, and Applications
Posted: Monday, 13 February 2023, 1:00 am

My brothers in faith,
I have brought you the book
Fundamentals and Methods of Machine and Deep Learning - Algorithms, Tools, and Applications
by Pradeep Singh

The contents are as follows:


Table of Contents
Cover
Title page
Copyright
Preface
1 Supervised Machine Learning: Algorithms and Applications
1.1 History
1.2 Introduction
1.3 Supervised Learning
1.4 Linear Regression (LR)
1.5 Logistic Regression
1.6 Support Vector Machine (SVM)
1.7 Decision Tree
1.8 Machine Learning Applications in Daily Life
1.9 Conclusion
References
2 Zonotic Diseases Detection Using Ensemble Machine Learning Algorithms
2.1 Introduction
2.2 Bayes Optimal Classifier
2.3 Bootstrap Aggregating (Bagging)
2.4 Bayesian Model Averaging (BMA)
2.5 Bayesian Classifier Combination (BCC)
2.6 Bucket of Models
2.7 Stacking
2.8 Efficiency Analysis
2.9 Conclusion
References
3 Model Evaluation
3.1 Introduction
3.2 Model Evaluation
3.3 Metric Used in Regression Model
3.4 Confusion Metrics
3.5 Correlation
3.6 Natural Language Processing (NLP)
3.7 Additional Metrics
3.8 Summary of Metric Derived from Confusion Metric
3.9 Metric Usage
3.10 Pro and Cons of Metrics
3.11 Conclusion
References
4 Analysis of M-SEIR and LSTM Models for the Prediction of COVID-19 Using RMSLE
4.1 Introduction
4.2 Survey of Models
4.3 Methodology
4.4 Experimental Results
4.5 Conclusion
4.6 Future Work
References
5 The Significance of Feature Selection Techniques in Machine Learning
5.1 Introduction
5.2 Significance of Pre-Processing
5.3 Machine Learning System
5.4 Feature Extraction Methods
5.5 Feature Selection
5.6 Merits and Demerits of Feature Selection
5.7 Conclusion
References
6 Use of Machine Learning and Deep Learning in Healthcare—A Review on Disease Prediction System
6.1 Introduction to Healthcare System
6.2 Causes for the Failure of the Healthcare System
6.3 Artificial Intelligence and Healthcare System for Predicting Diseases
6.4 Facts Responsible for Delay in Predicting the Defects
6.5 Pre-Treatment Analysis and Monitoring
6.6 Post-Treatment Analysis and Monitoring
6.7 Application of ML and DL
6.8 Challenges and Future of Healthcare Systems Based on ML and DL
6.9 Conclusion
References
7 Detection of Diabetic Retinopathy Using Ensemble Learning Techniques
7.1 Introduction
7.2 Related Work
7.3 Methodology
7.4 Proposed Models
7.5 Experimental Results and Analysis
7.6 Conclusion
References
8 Machine Learning and Deep Learning for Medical Analysis—A Case Study on Heart Disease Data
8.1 Introduction
8.2 Related Works
8.3 Data Pre-Processing
8.4 Feature Selection
8.5 ML Classifiers Techniques
8.6 Hyperparameter Tuning
8.7 Dataset Description
8.8 Experiments and Results
8.9 Analysis
8.10 Conclusion
References
9 A Novel Convolutional Neural Network Model to Predict Software Defects
9.1 Introduction
9.2 Related Works
9.3 Theoretical Background
9.4 Experimental Setup
9.5 Conclusion and Future Scope
References
10 Predictive Analysis of Online Television Videos Using Machine Learning Algorithms
10.1 Introduction
10.2 Proposed Framework
10.3 Feature Selection
10.4 Classification
10.5 Online Incremental Learning
10.6 Results and Discussion
10.7 Conclusion
References
11 A Combinational Deep Learning Approach to Visually Evoked EEG-Based Image Classification
11.1 Introduction
11.2 Literature Review
11.3 Methodology
11.4 Result and Discussion
11.5 Conclusion
References
12 Application of Machine Learning Algorithms With Balancing Techniques for Credit Card Fraud Detection: A Comparative Analysis
12.1 Introduction
12.2 Methods and Techniques
12.3 Results and Discussion
12.4 Conclusions
References
13 Crack Detection in Civil Structures Using Deep Learning
13.1 Introduction
13.2 Related Work
13.3 Infrared Thermal Imaging Detection Method
13.4 Crack Detection Using CNN
13.5 Results and Discussion
13.6 Conclusion
References
14 Measuring Urban Sprawl Using Machine Learning
14.1 Introduction
14.2 Literature Survey
14.3 Remotely Sensed Images
14.4 Feature Selection
14.5 Classification Using Machine Learning Algorithms
14.6 Results
14.7 Discussion and Conclusion
Acknowledgements
References
15 Application of Deep Learning Algorithms in Medical Image Processing: A Survey
15.1 Introduction
15.2 Overview of Deep Learning Algorithms
15.3 Overview of Medical Images
15.4 Scheme of Medical Image Processing
15.5 Anatomy-Wise Medical Image Processing With Deep Learning
15.6 Conclusion
References
16 Simulation of Self-Driving Cars Using Deep Learning
16.1 Introduction
16.2 Methodology
16.3 Hardware Platform
16.4 Related Work
16.5 Pre-Processing
16.6 Model
16.7 Experiments
16.8 Results
16.9 Conclusion
References
17 Assistive Technologies for Visual, Hearing, and Speech Impairments: Machine Learning and Deep Learning Solutions
17.1 Introduction
17.2 Visual Impairment
17.3 Verbal and Hearing Impairment
17.4 Conclusion and Future Scope
References
18 Case Studies: Deep Learning in Remote Sensing
18.1 Introduction
18.2 Need for Deep Learning in Remote Sensing
18.3 Deep Neural Networks for Interpreting Earth Observation Data
18.4 Hybrid Architectures for Multi-Sensor Data Processing
18.5 Conclusion
References
Index
End User License Agreement
List of Illustrations
Chapter 1
Figure 1.1 Linear regression [3].
Figure 1.2 Height vs. weight graph [6].
Figure 1.3 Logistic regression [3].
Figure 1.4 SVM [11].
Figure 1.5 Decision tree.
Chapter 2
Figure 2.1 A high-level representation of Bayes optimal classifier.
Figure 2.2 A high-level representation of Bootstrap aggregating.
Figure 2.3 A high-level representation of Bayesian model averaging (BMA).
Figure 2.4 A high-level representation of Bayesian classifier combination (BCC).
Figure 2.5 A high-level representation of bucket of models.
Figure 2.6 A high-level representation of stacking.
Chapter 3
Figure 3.1 ML/DL model deployment process.
Figure 3.2 Residual.
Figure 3.3 Confusion metric.
Figure 3.4 Confusion metric interpretation.
Figure 3.5 Metric derived from confusion metric.
Figure 3.6 Precision-recall trade-off.
Figure 3.7 AUC-ROC curve.
Figure 3.8 Precision-recall curve.
Figure 3.9 Confusion metric example.
Figure 3.10 Cosine similarity projection.
Figure 3.11 (a) Cosine similarity. (b) Soft cosine similarity.
Figure 3.12 Intersection and union of two sets A and B.
Figure 3.13 Confusion metric.
Chapter 4
Figure 4.1 Cases in Karnataka, India.
Figure 4.2 Cases trend in Karnataka, India.
Figure 4.3 Modified SEIR.
Figure 4.4 LSTM cell.
Figure 4.5 (a) Arrangement of data set in 3D tensor. (b)
Mapping of the 3D and 2...
Figure 4.6 RMSLE value vs. number of epochs.
Figure 4.7 Cases in Karnataka.
Figure 4.8 SEIR Model fit for test cases.
Figure 4.9 Cases predicted for next 10 days.
Figure 4.10 Testing results.
Figure 4.11 Next 10 days Prediction using LSTM model.
Figure 4.12 Prediction error curve.
Figure 4.13 Prediction error and RMSLE curve.
Chapter 5
Figure 5.1 Classification of feature extraction methods.
Chapter 6
Figure 6.1 Relationship between AI, ML, and DL.
Figure 6.2 Image segmentation process flow.
Figure 6.3 The visual representation of clinical data
generation to natural lang...
Chapter 7
Figure 7.1 Extraction of exudates.
Figure 7.2 Extraction of blood vessels.
Figure 7.3 Extraction of microaneurysms.
Figure 7.4 Extraction of hemorrhages.
Figure 7.5 Working of AdaBoost model.
Figure 7.6 Working of AdaNaive model.
Figure 7.7 Working of AdaSVM model.
Figure 7.8 Working of AdaForest model.
Figure 7.9 Representative retinal images of DR in their order
of increasing seve...
Figure 7.10 Comparison of classifiers using ROC curve (Binary classification).
Figure 7.11 Comparison of classifiers (Binary Classification).
Figure 7.12 Comparison of classifiers (Multi Classification).
Chapter 8
Figure 8.1 Workflow model of proposed system.
Figure 8.2 Architecture of proposed system.
Figure 8.3 Original dataset distribution.
Figure 8.4 Resampling using SMOTE.
Figure 8.5 Target class distribution.
Figure 8.6 Resampled distribution applying SMOTE.
Figure 8.7 Feature ranking using Extra tree classifier.
Figure 8.8 p-values of the features.
Figure 8.9 Performance evaluation of models under study 1
with dataset size = 1,...
Figure 8.10 Performance evaluation of models under study 2
with data size = 1,00...
Figure 8.11 Performance evaluation of models under study 3
With dataset size = 1...
Figure 8.12 Correlation between follow-up time and death
event.
Figure 8.13 Performance evaluation of models on different classifiers.
Figure 8.14 Performance evaluation of models on dataset size
= 508.
Figure 8.15 Performance evaluation of models on dataset size
= 1,000.
Chapter 9
Figure 9.1 File level defect prediction process.
Figure 9.2 A basic convolutional neural network (CNN)
architecture.
Figure 9.3 Overall network architecture of proposed NCNN
model.
Figure 9.4 Description regarding confusion matrix.
Figure 9.5 Confusion matrix analysis for the data sets (KC1, KC3, PC1, and PC2).
Figure 9.6 Model accuracy and model loss analysis for the data sets (KC1, KC3, P...
Figure 9.7 Performance comparison of different models for
software defect predic...
Figure 9.8 Model accuracy analysis for the data sets (KC1, KC3,
PC1, and PC2).
Figure 9.9 Confusion rate analysis for the data sets (KC1, KC3,
PC1, and PC2).
Chapter 10
Figure 10.1 Hierarchical video representation.
Figure 10.2 Overall architecture of the proposed framework.
Figure 10.3 Blocking pattern.
Figure 10.4 Key frame extraction.
Figure 10.5 Training and testing process.
Figure 10.6 Predicted output frames from advertisement
videos.
Figure 10.7 Predicted output frames from non-advertisement
videos.
Chapter 11
Figure 11.1 Flowchart of proposed architecture.
Figure 11.2 Architecture of proposed combinational
CNN+LSTM model.
Figure 11.3 Overall XceptionNet architecture.
Figure 11.4 Proposed CNN model’s accuracy graph on (a)
MindBig dataset and (b) P...
Figure 11.5 Proposed CNN+LSTM model’s accuracy graph on
(a) MindBig dataset and ...
Chapter 12
Figure 12.1 Flow diagram of the credit card fraudulent transaction detection.
Figure 12.2 Correlation matrix for the credit card dataset
showing correlation b...
Figure 12.3 Oversampling of the fraud transactions.
Figure 12.4 Undersampling of the no-fraud transactions.
Figure 12.5 SMOTE [26].
Figure 12.6 Optimal hyperplane and maximum margin [29].
Figure 12.7 Support vector classifier.
Figure 12.8 Binary decision tree [31].
Figure 12.9 (a) Five-fold cross-validation technique and (b)
GridSearchCV.
Figure 12.10 (a) ROC curve [39]. (b) Precision recall curve for
no skill and log...
Figure 12.11 Outline of implementation and results.
Chapter 13
Figure 13.1 The architecture crack detection system.
Figure 13.2 (a) Thermal image. (b) Digital image. (c) Thermal
image. (d) Digital...
Figure 13.3 CNN layers in learning process.
Chapter 14
Figure 14.1 Raw images (Band 2 and Band 5, respectively).
Figure 14.2 Band combination 3-4-6 and 3-2-1, respectively.
Figure 14.3 Spectral signatures after atmospheric correction.
Figure 14.4 Pictorial representation of Euclidean and
Manhattan distances.
Figure 14.5 Discriminant functions.
Figure 14.6 Result of ML classifier.
Figure 14.7 Result of k-NN classifier.
Chapter 15
Figure 15.1 Digital medical images: (a) X-ray of chest, (b) MRI
imaging of brain...
Figure 15.2 Scheme of image processing [12].
Figure 15.3 Anatomy-wise breakdown of papers in each year
(2016–2020).
Figure 15.4 Year-wise breakdown of papers (2016–2020)
based on the task.
Chapter 16
Figure 16.1 Prototype 1:16 scale car.
Figure 16.2 Image processing pipeline.
Figure 16.3 Original Image.
Figure 16.4 Canny edge output.
Figure 16.5 Hough lines overlaid on original image.
Figure 16.6 CNN model architecture.
Figure 16.7 Experimental track used for training and testing.
Figure 16.8 Accuracy vs. training time (hours) plot of Model 1 that uses classif...
Figure 16.9 Loss vs. training time (hours) plot of Model 1 that uses classificat...
Figure 16.10 MSE vs. steps plot of Model 2 that uses classification method with ...
Figure 16.11 MSE vs. steps plot of Model 8 that uses classification method with ...
Figure 16.12 Accuracy vs. steps plot of Model 5 that uses classification method ...
Figure 16.13 Loss vs. steps plot of Model 5 that uses classification method with...
Figure 16.14 Input image given to CNN.
Figure 16.15 Feature map at second convolutional layer.
Figure 16.16 Feature map at the fifth convolutional layer.
Chapter 17
Figure 17.1 An architecture of simple obstacle detection and
avoidance framework...
Figure 17.2 A prototype of a wearable system with image to
tactile rendering fro...
Figure 17.3 DG5-V hand glove developed for Arabic sign
language recognition [40]...
Chapter 18
Figure 18.1 Land cover classification using CNN.
Figure 18.2 Remote sensing image classifier using stacked denoising autoencoder.
Figure 18.3 Gaussian-Bernoulli RBM for hyperspectral image classification.
Figure 18.4 GAN for pan-sharpening with multispectral and
panchromatic images.
Figure 18.5 Change detection on multi-temporal images using
RNN.
List of Tables
Chapter 3
Table 3.1 Calculation and derived value from the predicted
and actual values.
Table 3.2 Predicted probability value from model and actual
value.
Table 3.3 Predicting class value using the threshold.
Table 3.4 Document information and cosine similarity.
Table 3.5 Metric derived from confusion metric.
Table 3.6 Metric usage.
Table 3.7 Metric pros and cons.
Chapter 4
Table 4.1 Model summary.
Table 4.2 Predicted data.
Chapter 7
Table 7.1 Literature survey of Diabetic Retinopathy.
Table 7.2 Retinopathy grades in the Kaggle dataset.
Table 7.3 Accuracy for binary classification using machine learning techniques.
Table 7.4 Accuracy for multiclass classification using machine learning techniqu...
Chapter 8
Table 8.1 Description of each feature in the dataset.
Table 8.2 Sample dataset.
Table 8.3 Experiments description.
Table 8.4 Accuracy scores (in %) of all classifiers on different data size.
Table 8.5 Accuracy scores (in %) of all classifiers on different data size.
Table 8.6 Accuracy scores (in %) of all classifiers on different data size.
Table 8.7 Logit model statistical test.
Table 8.8 Chi-square test.
Chapter 9
Table 9.1 Characteristics of the NASA data sets.
Table 9.2 Attribute information of the 21 features of PROMISE repository [13].
Table 9.3 Performance comparison for the data set KC1.
Table 9.4 Performance comparison for the data set KC3.
Table 9.5 Performance comparison for the data set PC1.
Table 9.6 Performance comparison for the data set PC2.
Table 9.7 Confusion matrix analysis for the KC1, KC3, PC1, and
PC2 data sets (TP...
Chapter 10
Table 10.1 Classifiers vs. classification accuracy.
Table 10.2 Performance metrics of the recommended classifier.
Table 10.3 Confusion matrix.
Chapter 11
Table 11.1 Dataset description.
Table 11.2 Architecture of proposed convolutional neural
network.
Table 11.3 Classification accuracy (%) with two proposed models on two different...
Chapter 12
Table 12.1 Description of ULB credit card transaction dataset.
Table 12.2 Confusion matrix [7].
Table 12.3 Result summary for all the implemented models.
Table 12.4 Confusion matrix results for all the implemented
models.
Chapter 13
Table 13.1 Activation functions.
Table 13.2 Optimizers.
Table 13.3 Performance: optimizer vs. activation functions.
Chapter 14
Table 14.1 General confusion matrix for two class problems.
Table 14.2 Confusion matrix for a ML classifier.
Table 14.3 Confusion matrix for a k-NN classifier.
Table 14.4 Average precision, recall, F1-score, and accuracy.
Chapter 15
Table 15.1 Summary of datasets used in the survey.
Table 15.2 Summary of papers in brain tumor classification using DL.
Table 15.3 Paper summary—cancer detection in lung nodule by DL.
Table 15.4 Paper summary—classification of breast cancer by DL.
Table 15.5 Paper summary on heart disease prediction using
DL.
Table 15.6 COVID-19 prediction paper summary.
Chapter 16
Table 16.1 CNN architecture.
Table 16.2 Model definition.
Table 16.3 Model results.
Chapter 17
Table 17.1 Comparison of sensors for obstacle detection in
ETA inspired from [16...
Table 17.2 A comparison between few wearables.
Table 17.3 Sensor based methods from literature.
Table 17.4 Vision based approaches.
Chapter 18
Table 18.1 Hybrid deep architectures for remote sensing.
Index
Accuracy, 211, 213, 215, 218, 222–227, 229–230, 233, 295, 335, 337
Ackerman steering, 381
Activation functions,
linear, 318
ReLu, 316
SeLu, 316
sigmoid, 318
softsign, 318
AdaBoost, 164, 188
AdaForest, 166
Adam optimizer, 106, 111
AdaNaive, 165
ADAptive SYNthetic (ADASYN) imbalance learning technique, 182
AdaSVM, 166
Additional metrics, 86
Cohen Kappa, 87
Gini coefficient, 87
mean reciprocal rank (MRR), 86–87
percentage errors, 88
scale-dependent errors, 87–88
scale-free errors, 88
Age and cardiovascular disease, 179
Alerting devices, 411
AlexNet, 344
Artificial intelligence, 238
Artificial neural network (ANN), 189, 213, 234
Assistive listening devices, 410–411
Assistive technology, 397–399
Attribute subset selection, 126
AUC, 296–297
Augmentative and alternative communication devices, 411–417
AUPRC, 296–297
Autoencoder,
stacked autoencoder, 433
stacked denoising autoencoder, 428–429
Autoencoders, 345
Backward elimination method, 126
Bagging, 188
Barrier, 138
Bayes optimal classifier, 19
Bayesian classifier combination, 24
Bayesian model averaging, 22
Bayesian network, 213, 234
BFGS optimization algorithm, 103, 108
Bladder volume prediction, 147
Block intensity comparison code (BICC), 246
Blood vessels, 161
Bootstrap aggregating, 21
Bootstrap replica, 188
Brain tumor, 352–353, 356
Breast cancer segmentation and detection, 362, 364
Broadcast, 242
Broyden Fletcher Goldfarb Shanno (BFGS) function, 108
Bucket of models, 26, 439
Canny edge detector, 382, 383, 394
Cardiovascular diseases, 178
CART models, 188
Catastrophic failure, 253
Change detection, 431, 433
Chi-square tests, 179, 184
CLAHE, 161
Class labels, 4
Class weight, 287
Classification accuracy, 271
Classifier, 211–219, 222, 224, 230, 232, 234
Cloud removal, 432–433
Clustering, 128
Cognitive, 238
Computer vision and deep learning in AT, 403–409
Confusion matrix, 222, 225, 230–231, 334, 335, 337
FN, 294–295
FP, 294–295
TN, 294–295
TP, 294–295
Confusion metrics, 52
accuracy, 55
AUC-PRC, 65–67
AUC-ROC, 64–65
derived metric from recall, precision and F1-score, 67–68
F1-beta score, 61–63
F1-score, 61
false negative rate (FNR), 57
false positive rate (FPR), 58
how to interpret confusion metric, 53–55
precision, 58
recall, 59–60
recall-precision trade-off, 60–61
solved example, 68–70
thresholding, 63–64
true negative rate (TNR), 57
true positive rate (TPR), 56
Continuous target values, 4
Convolutional neural network (CNN), 189, 211, 213, 216, 219, 235, 266, 316, 343–344, 379, 380, 382, 384, 385, 386, 387, 388, 392, 394, 427–428, 432–434
Correlation, 70
biserial correlation, 78
biweight mid-correlation, 75–76
distance correlation, 74–75
gamma correlation, 76–77
Kendall’s rank correlation, 73–74
partial correlation, 78
pearson correlation, 70–71
point biserial correlation, 77
spearman correlation, 71–73
COVID-19 prediction, 370
Cox regression model, 180
Crack detection, 315
Cross-validation, 190
Cross-validation technique, 293–294
CT scans, 347
Data imbalance, 181–182
Data mining, 3
Data pursuing, 140
Data reduction, 125
Data scrutiny, 140
Decision tree, 186–187, 211
Decision tree induction method, 126
Deep belief network, 429–430
Deep features, 213–214, 219
Deep learning, 261, 312
Deep learning algorithms in medical image processing,
anatomy-wise medical image processing with deep learning, 349, 351–353, 356–357, 361–362, 364, 370
introduction, 342–343
overview of deep learning algorithms, 343–345
overview of medical images, 346–348
scheme of medical image processing, 348–349
Deep learning model, 109–110
Deep neural network, 211, 213, 234–235
DenseNet, 344
Destructive testing, 312
Diabetic retinopathy, 153–154
Diastolic heart failure, 179–180
Dilated cardiomyopathy, 179
Dimension reduction, 125
Dimensionality, 250
Discrete Fourier transform, 127
Discrete wavelet transform, 127
Distance-based metric, 331
Edge detection, 161
EEG (Electroencephalogram), 260
Efficiency, 29
Ejection fraction, 178, 179
Electronic travel aids (ETA), 400–401, 403
End-to-end learning, 379, 380, 382, 387, 392, 394
Ensemble machine learning, 18
Evaluation, 334
Expert system, 145
Extra tree classifiers, 179, 182–183, 195–196, 199, 203
Exudates, 161
F1-score, 295, 335, 337
Facebook’s DeepFace algorithm, 13
Fault proneness, 211, 213
Feature selection, 128, 331
embedded methods, 130
filter methods, 129
wrapper methods, 129
Feature selection method, 179–184, 195–196
chi-square test, 184
extra tree classifier, 182–183
forward stepwise selection, 183–184
Pearson correlation, 183
Feed-forward network, 189
F-measure, 211, 213–214, 222–224, 227–230, 232
Forward selection method, 126
Forward stepwise selection, 183–184
Fourier transform, 314
Fraud detection, 14–15
Gabor wavelet filters, 353
Generative adversarial networks (GANs), 345, 362, 430, 432–433
GNMT, 14
Google tracks, 13
Google translate, 14
GoogleNet, 344
Gradient descent, 6–7
Gridsearch, 293–294
Haptics, 399, 408, 409
Heart disease prediction, 364, 370
Heart failure, 178–180, 190, 193, 195, 196, 199, 204, 206–207
Hemorrhages, 162
Hough transform, 383, 394
Hybrid architectures, 432–434
Hyperparameter tuning, 190
Hyperspectral image classification, 429–430
Image analysis, 349
Image enhancement, 349
Image fusion, 427, 433
Image visualization, 349
Incremental algorithm, 256
Infrared thermography, 313
Interpretation, 137
Ischemic cardiomyopathy, 179
Javalang, 233
Keras, 211, 219, 230
Key frame rate (KFR), 246
K-nearest neighbors (KNN), 163, 187, 334
Land cover classification, 426–431
Lasso regression, 180
Lexical analyzer, 233
Logistic regression model, 185
Logit model, 195, 196, 199, 203
Long short-term model (LSTM), 269
Long short-term memory (LSTM), 104–106, 108–113, 116
Loss function, 212, 221, 225
LSTM, 345
Lung nodule cancer detection, 357, 361–362
Machine learning, 211–212, 233–234, 245
Machine learning algorithms,
decision tree, 290
logistic regression, 288
random forest, 292
svm, 288
Machine learning and deep learning for medical analysis,
data pre-processing, 181–182
dataset description, 190–191, 193–197
experiments and results, 197–206
feature selection, 182–184
hyperparameter tuning, 190
introduction, 178–179
ML classiϐiers techniques, 184–189
related works, 179–181
Machine learning system, 123
Machine learning techniques, 280
supervised, 280
unsupervised, 280
Maximum likelihood classifier, 327, 328, 332
Maximum marginal hyperplane (MMH), 9
MCC, 295
Mean squared error, 389, 390
Mel-frequency cepstral coefficients (MFCCs), 239
Metaheuristic method, 103
Metric usage, 90–93
Metric used in regression model, 38
ACP, press and R-square predicted, 49–50
adjusted R-square, 46–47
AIC, 48–49
BIC, 49
mean absolute error (Mae), 38–39
mean square error (Mse), 39–40
root mean square error (Rmse), 41–42
root mean square logarithm error (Rmsle), 42–45
R-square, 45
problem with R-square, 46
solved examples, 51
variance, 47–48
Microaneurysms, 162
MindBig dataset, 264
ML classifiers techniques, 184–189
ensemble machine learning model,
AdaBoost, 188
bagging, 188
random forest, 187–188
neural network architecture,
artificial neural network (ANN), 189
convolutional neural network (CNN), 189
supervised machine learning models,
decision tree, 186–187
K-nearest neighbors (KNN), 187
logistic regression model, 185
Naive Bayes, 186
support vector machines (SVM), 186
MLP (multilayer perceptron), 379, 380, 384, 385, 387, 388, 390
Model evaluation, 34–35
assumptions, 36
error sum of squares (Sse), 37
regression sum of squares (SSr), 37
residual, 36
total sum of squares (Ssto), 37–38
Model selection, 124
MRI scans, 346–347
M-SEIR and LSTM models for COVID-19 prediction,
experimental results, 111–113, 116
introduction, 101–102
methodology, 106–111
survey of models, 103–106
Multi-level perceptron (MLP), 189
Multimedia applications, 255
Naive Bayes, 186, 211–214, 216, 218, 221, 224–225, 227–230, 232
NASA, 211–213, 215, 218, 219, 224, 230
Natural language processing (NLP), 78
BLEU score, 79
BLEU score with N-gram, 80–81
cosine similarity, 81–83
Jaccard index, 83–84
MACRO, 86
N-gram, 79
NIST, 85
ROUGE, 84–85
SQUAD, 85–86
Navigational aids, 399, 403, 409
Netflix application, 14
Neural machine algorithm, 14
Neural network classiϐier models, 205
Neural networks, 102, 104–105, 113, 116
Neurons, 239
Nondestructive testing, 312
NVIDIA, 14
Obstacle detection and avoidance, 400
Oil spill detection, 433–434
OLS, 6–7
Online video streaming (Netflix), 14
OpenCV, 312
Optimizer, 212, 221, 225
Optimizers,
adagrad, 320
adam, 321
gradient descent, 319
stochastic GD, 319
Ordinary least squares (OLS), 6
Outlier detection, 123
Outliers, 123
Pan-sharpening, 430, 433–434
Pearson correlation, 179, 183, 195, 196
PET scans, 347–348
Precision, 211, 213–214, 222–224, 227–230, 295, 335, 337
Prediction, 253
Predictive system, 137
Pre-treatment, 144
Principal components analysis, 127
Pro and cons of metrics, 94
Program modules, 215
PROMISE, 211–212, 215, 218, 219, 224, 230, 234
Pruning, 244
Random forest, 164, 187–188, 211–214, 216, 218, 221, 224–225, 227–230, 232
Raspberry Pi, 381, 385
Recall, 211–213, 222–224, 227–230, 295, 335, 337
Receiver operating characteristics, 213
Rectified linear units, 217, 221
Recurrent neural network, 215, 344–345, 426, 431–434
Regression, 5
Regularization, 212
Reinforcement learning (RL), 4
Remote sensing, 425–434
Remotely sensed images, 327–329
ResNet, 344
Restricted Boltzmann machine, 429–430
Rheumatoid arthritis (RA), 240
RMSLE loss function, 106
RNN, 104–106
Root mean square logarithmic error (RMSLE) function, 110–111, 113
Sampling technique,
oversampling, 286
smote, 286
under-sampling, 286
Segmentation, 248, 349
SEIRPDQ model, 103–104
Seizure, 148
Self-driving, 379, 380, 382, 394
Self-driving cars, 14
Semantic, 239
Sensor-based methods, 413–414
Serum creatinine concentration, 178, 179
Short-time Fourier transform (STFT), 262
Sign language recognition, 412–417
Significance of machine learning and deep learning in assistive communication technology, 417–418
Sleep pattern, 138
Social media (Facebook), 13
Software defect prediction, 211–215, 222, 227–230, 233–234
Software modules, 211, 234
Sparse coding techniques, 132
Spatial domain technique, 349
Spatial, 246
Spectral signature, 327, 330–333
Spectrogram images, 265
Stacking, 27
Summary of metric derived from confusion metric, 89
Supervised deep neural networks,
convolutional neural network, 343–344
recurrent neural network, 344–345
transfer learning, 344
Supervised machine learning (SML),
decision tree, 11–12
history, 2
linear regression (LR), 5–8
logistic regression, 8–9
machine learning applications in daily life, 12–15
supervised learning, 4–5
support vector machine (SVM), 9–10
Support vector machines (SVM), 163, 186, 212–213, 327, 332, 338, 353
Susceptible exposed infectious removed (SEIR) model, 103–104, 106–108
Synthetic minority oversampling technique (SMOTE), 181–182, 193–194, 198, 200
Temporal, 246
TensorFlow, 211, 219, 230
Tesla, 14
Thermal image, 315
Tracer, 347–348
Traffic alerts (Maps), 12
Transaction,
fraudulent, 278
legitimate, 278
Transfer learning, 322, 344
Transportation and commuting (Uber), 13
Trimming, 244
UCSD cohort, 180
Unsupervised learning (UL), 4
autoencoders, 345
GANs, 345
Urbanization, 327, 329
VGGNet, 344
Video streaming, 252
Virtual personal assistants, 13–14
Vision-based methods, 414–417
Visual stimulus evoked potentials, 259
Voting technique, 164
Wavelet transforms, 127, 314
Wearables, 408–409, 411
WEKA, 218
Xception network, 267
XGBoost, 198, 206
X-ray scans, 347
Zonotic disease, 17
Zonotic microorganism, 18


The Unzip Password: books-world.net
I hope you find the topic useful and that it meets your approval.

Link from the Books World site to download the book Fundamentals and Methods of Machine and Deep Learning - Algorithms, Tools, and Applications
Direct link to download the book Fundamentals and Methods of Machine and Deep Learning - Algorithms, Tools, and Applications
Similar topics
-
» Book: Fundamentals of Neural Networks - Architectures, Algorithms, and Applications
» Book: Fundamentals of Machine Tools
» Book: Fundamentals of Machine Tools
» Book: Fundamentals of Metal Machining and Machine Tools
» Book: Neural Networks and Learning Algorithms in MATLAB
