He is a researcher, editor, and author. He is the Editor-in-Chief at Circuit Cellar Magazine, United States, and a writer at TechnologyAdvice, United States. In addition to his Bachelor's and Master's degrees in Computer Science and Engineering, he has completed thirty diploma courses and a hundred certificate courses. According to Publons (Web of Science Core Collection), he is one of the world's top peer reviewers on Convolutional Neural Networks.
His expertise and research interests include Convolutional Neural Networks (CNN), Artificial Neural Networks (ANN), Cloud Computing, Artificial Intelligence (AI), Intelligent Transportation Systems, Information Technology (IT), the System Development Life Cycle (SDLC), Computer Vision, Psychology, and Astronomy.
Deep Neural Networks are characterized by their weights, biases, and activation functions. An activation function decides whether a neuron should be activated by computing a weighted sum of the inputs and adding a bias. In this book, I present an experimental review of eight different activation functions for the convolutional layers of Convolutional Neural Networks.
For my experiment, I selected eight activation functions and three different datasets. The activation functions are Sigmoid, Softmax, tanh, Softplus, Softsign, ReLU, ELU, and SELU. I also experimented with networks that use no activation function at all for the convolutional layers. After analyzing the results, I found that models with three different activation functions achieved the highest performance on the three datasets. Interestingly, the best average performance across the three datasets was achieved by using the Softmax activation function for the convolutional layers. ReLU and ELU are currently the most widely used activation functions, yet tanh, Softplus, and Softsign achieved better average performance on these three datasets.
In this book, I focus solely on the activation function of the convolutional layer to test the performance of Convolutional Neural Networks.
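As a rough illustration of this kind of comparison (the architecture, input shape, and class count below are placeholders, not the book's exact models), the following TensorFlow/Keras sketch builds the same small CNN once per activation function, so each one can be trained and evaluated under identical conditions:

```python
# Minimal sketch, assuming TensorFlow/Keras; not the book's exact models.
from tensorflow.keras import layers, models

# The eight activation functions compared in the book, plus None
# for the no-activation baseline.
ACTIVATIONS = ["sigmoid", "softmax", "tanh", "softplus",
               "softsign", "relu", "elu", "selu", None]

def build_cnn(conv_activation, input_shape=(32, 32, 3), num_classes=10):
    """Fixed architecture; only the convolutional-layer activation varies."""
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, (3, 3), activation=conv_activation),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation=conv_activation),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(num_classes, activation="softmax"),
    ])

for act in ACTIVATIONS:
    model = build_cnn(act)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(x_train, y_train, validation_split=0.1, epochs=...)
    print(act, "->", model.count_params())
```

Keeping every layer except the convolutional activation fixed is what makes the comparison meaningful: any difference in accuracy can then be attributed to the activation function alone.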
In this book, I perform an experimental review of twelve Convolutional Neural Network architectures that are identical except for the kernel size of their filters.
For this experiment, I selected twelve kernel sizes, one per model: (12, 12), (11, 11), (10, 10), (9, 9), (8, 8), (7, 7), (6, 6), (5, 5), (4, 4), (3, 3), (2, 2), and (1, 1).
I used the "Flowers Recognition" dataset, with 77 batches (batch size = 45) per epoch and 10 epochs per experimental fold.
After analyzing the results, I found that kernel sizes (2, 2) and (3, 3) give the best performance for the two-dimensional convolutional layer in Convolutional Neural Networks.
The goal of this experiment is to help developers understand and select the right kernel size for filters during two-dimensional image processing with the two-dimensional convolutional (Conv2D) layer [11] of CNNs.
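A minimal sketch of such a sweep (the layer layout and input size are assumptions, not the book's exact architecture), again in TensorFlow/Keras: it builds one model per kernel size and applies the training settings quoted above, while loading the "Flowers Recognition" images into x_train/y_train is left to the reader:

```python
# Rough sketch of the kernel-size sweep; layer layout and input
# size are assumed, not taken from the book.
from tensorflow.keras import layers, models

KERNEL_SIZES = [(k, k) for k in range(12, 0, -1)]  # (12, 12) down to (1, 1)

def build_cnn(kernel_size, input_shape=(128, 128, 3), num_classes=5):
    """Identical models except for the Conv2D kernel size."""
    return models.Sequential([
        layers.Input(shape=input_shape),
        # padding="same" keeps large kernels (e.g. 12x12) from
        # shrinking the feature maps too aggressively.
        layers.Conv2D(32, kernel_size, activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, kernel_size, activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(num_classes, activation="softmax"),
    ])

for ks in KERNEL_SIZES:
    model = build_cnn(ks)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # Training settings quoted in the text: batch size 45, 10 epochs.
    # model.fit(x_train, y_train, batch_size=45, epochs=10)
    print(ks, "->", model.count_params())
```

Note how the parameter count printed for each model grows with the kernel size; a fair sweep therefore compares accuracy per kernel size with training time and capacity in mind, not accuracy alone.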