Simple networks have worked wonderfully in classifying small datasets such as MNIST. Multiple layers give a network an attractive advantage in learning to solve complex problems, and adding depth to a CNN in different forms can improve performance. However, increasing the depth introduces the problem of vanishing gradients, which ResNet [21] resolved by adding skip connections. Adding these skip connections reduced the number of parameters compared to a traditional CNN. Another advantage of skip connections is that they allow better gradient flow through the network. Although CNNs are good at image recognition, image classification, object detection, and many other CV-related applications, they are less effective where spatial relationships exist between features such as pose, shape, color, and orientation, requiring multiple layers and neurons to capture feature variations. The capsule network, on the other hand, shares a single capsule across the network in order to identify the different variants; this is the main difference between invariance and equivariance. A pooling unit focuses only on the existence of a feature and ignores its position in order to reduce the number of parameters. To overcome this drawback of neglecting the position of an entity, Sabour et al. [6] proposed a capsule network that stores unit information at a vector level instead of a scalar level, using dynamic routing (also called routing-by-agreement). While the capsule network is powerful, there is always room for improvement when it comes to neural networks.
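To make the vector-level idea concrete, the following is a minimal numpy sketch of the routing-by-agreement procedure from Sabour et al. [6]: a "squash" nonlinearity keeps each capsule's output vector length below 1, and an iterative loop raises the coupling between a lower-level capsule and a higher-level capsule when their vectors agree. The capsule counts, dimensions, and iteration count here are illustrative, not taken from the paper's architecture.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    # Squash nonlinearity: scales a vector's length into [0, 1)
    # while preserving its direction, so length can encode existence.
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def dynamic_routing(u_hat, n_iters=3):
    # u_hat: prediction ("vote") vectors of shape (n_in, n_out, dim),
    # one vote from each lower capsule for each higher capsule.
    n_in, n_out, dim = u_hat.shape
    b = np.zeros((n_in, n_out))  # routing logits, start uniform
    for _ in range(n_iters):
        # Coupling coefficients: softmax over the output capsules.
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)
        s = (c[..., None] * u_hat).sum(axis=0)   # weighted sum of votes
        v = squash(s)                            # output capsule vectors
        b = b + (u_hat * v[None]).sum(axis=-1)   # agreement update
    return v

rng = np.random.default_rng(0)
u_hat = rng.normal(size=(8, 4, 16))  # 8 input capsules, 4 outputs, 16-D
v = dynamic_routing(u_hat)
print(v.shape)                           # (4, 16)
print(np.all(np.linalg.norm(v, axis=-1) < 1.0))
```

The length of each output vector stays strictly below 1, so it can be read as the probability that the entity the capsule represents is present, while the vector's orientation encodes its pose parameters.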
