Bayesian classification of image textures from deep convolutional neural networks.

Texture analysis is currently experiencing renewed interest owing to the advances made since 2012 (Krizhevsky, 2012) by deep convolutional neural networks (DCNNs) in computer vision. In (Cimpoi, 2016), the convolutional layers of the AlexNet DCNN (Krizhevsky, 2012), pre-trained on the ImageNet database, were successfully transferred to texture classification. In (Andrearczyk, 2016), less complex DCNNs were fully trained on texture databases. However, the problem of training or transferring DCNNs on specific texture databases for which little data is available remains open. The main difficulty is to avoid the overfitting caused by the complexity of DCNNs. One of the challenges of this thesis is to address this issue by developing a Bayesian approach to DCNNs.

To address this issue, we propose a Bayesian approach to DCNNs that builds on our previous work on texture analysis (Richard, 2016a; Richard, 2016b; Richard, 2018), in which images are considered as realizations of random fields and texture properties are estimated through a variographic analysis. Like (Andrearczyk, 2016), we propose to form a DCNN from a series of convolutional layers that filter the images. However, the convolution kernels, activation functions, and pooling stages will be adapted so that they reduce to operations similar to those of a variographic analysis.
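As a purely illustrative sketch of what such a stage could look like (not the architecture the thesis will develop: the increment kernels, the quadratic activation, the average pooling and the PyTorch implementation below are assumptions chosen to mirror an empirical variogram computation):

    import torch
    import torch.nn as nn

    class VariogramLayer(nn.Module):
        """Fixed increment filters + power activation + spatial averaging (illustrative)."""

        def __init__(self, power=2.0):
            super().__init__()
            self.power = power
            # Increments X(s + e1) - X(s) and X(s + e2) - X(s) along the two image axes.
            self.conv_h = nn.Conv2d(1, 1, kernel_size=(1, 2), bias=False)
            self.conv_v = nn.Conv2d(1, 1, kernel_size=(2, 1), bias=False)
            with torch.no_grad():
                self.conv_h.weight.copy_(torch.tensor([[[[1.0, -1.0]]]]))
                self.conv_v.weight.copy_(torch.tensor([[[[1.0], [-1.0]]]]))
            for p in self.parameters():
                p.requires_grad_(False)  # increment kernels are fixed here, not learned

        def forward(self, x):
            # |increment|^p averaged over space: one semi-variogram-like value per lag.
            feats = [(conv(x).abs() ** self.power).mean(dim=(2, 3))
                     for conv in (self.conv_h, self.conv_v)]
            return 0.5 * torch.cat(feats, dim=1)

    # Usage on a batch of grayscale textures of shape (N, 1, H, W):
    # print(VariogramLayer()(torch.randn(8, 1, 64, 64)).shape)  # torch.Size([8, 2])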

Part of the thesis will be devoted to studying the probability distributions of the network layers under the assumption that the input images are realizations of intrinsic random fields. This study will form the basis of a Bayesian interpretation of the DCNN. Through this interpretation, we will be able to shed statistical light on operations proposed in the literature. We will further complement the architecture with appropriate constraints and operations to reduce the effects of overfitting.
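As an illustration of the kind of statement this study could make precise (a sketch under standard assumptions, not a result of the thesis): if the input image is a realization of an intrinsic random field X with stationary increments, its semi-variogram

    \gamma(h) = \frac{1}{2}\,\mathbb{E}\big[(X(s+h) - X(s))^{2}\big], \qquad s, h \in \mathbb{R}^{2},

does not depend on the location s. Consequently, a layer that computes increments of its input, squares them, and averages them over space outputs an unbiased estimator of \gamma(h), whose distribution can then be characterized under the random field model.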

Another part of the thesis will concern the training of the network. We will propose a two-step strategy. First, we will train the lower part of the DCNN (which extracts texture characteristics) on an unlimited number of examples obtained by simulating a random field model. Then, we will transfer this pre-trained lower part to classify specific databases. In this way, the complexity of the lower part is not limited by the size of the training set. The work will mainly involve determining an appropriate architecture to reach a sufficient level of accuracy. The work of the first part will make it possible to define a priori constraints that limit the effects of overfitting.
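Purely as an illustration of this two-step strategy (the spectral synthesis of fractional Brownian-type textures, the layer sizes and the pre-training task below are assumptions made for the sketch, not the choices that will be made in the thesis):

    import numpy as np
    import torch
    import torch.nn as nn

    def simulate_texture(size=64, hurst=0.5, rng=None):
        """Approximate spectral synthesis of a fractional Brownian-type texture."""
        rng = rng or np.random.default_rng()
        fx = np.fft.fftfreq(size)[:, None]
        fy = np.fft.fftfreq(size)[None, :]
        radius = np.sqrt(fx ** 2 + fy ** 2)
        radius[0, 0] = 1.0                          # avoid dividing by zero at frequency 0
        amplitude = radius ** (-(hurst + 1.0))      # power-law spectrum controlled by hurst
        noise = rng.standard_normal((size, size)) + 1j * rng.standard_normal((size, size))
        field = np.real(np.fft.ifft2(amplitude * noise))
        return (field - field.mean()) / (field.std() + 1e-8)

    # Step 1: pre-train a lower part (texture features) on an unlimited stream of
    # simulated fields, e.g. by regressing the Hurst exponent used in the simulation.
    lower = nn.Sequential(nn.Conv2d(1, 8, 5, padding=2), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1), nn.Flatten())
    # Step 2: freeze the pre-trained lower part and train only a small head
    # (here a linear classifier) on the specific, small texture database.
    for p in lower.parameters():
        p.requires_grad_(False)
    head = nn.Linear(8, 10)   # 10 = number of classes in the target database (assumed)
    x = torch.from_numpy(simulate_texture()).float()[None, None]   # shape (1, 1, 64, 64)
    logits = head(lower(x))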

We will apply this approach to molecular imaging data acquired by Positron Emission Tomography (PET) and Single-Photon Emission Computed Tomography (SPECT) to improve the management of neurological and psychiatric brain diseases (Pan, 2018; Garali, 2018). The aim is to develop PET/SPECT signal quantification tools that take into account the spatial interactions between voxels that give rise to image texture, the temporal dynamics of the images, and their multiparametric (and especially multi-target) character.

Bibliographical references:

(Andrearczyk, 2016) V. Andrearczyk and P. Whelan. Using filter banks in convolutional neural networks for texture classification. Pattern Recognition Letters, 84:63–69, 2016.

(Cimpoi, 2016) M. Cimpoi, S. Maji, I. Kokkinos, and A. Vedaldi. Deep filter banks for texture recognition, description, and segmentation. International Journal of Computer Vision, 118(1):65–94, 2016.

(Krizhevsky, 2012) A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.

(Richard, 2016a) F. Richard. Tests of isotropy for rough textures of trended images. Statistica Sinica, 26:1279–1304, 2016.

(Richard, 2016b) F. Richard. Some anisotropy indices for the characterization of Brownian textures and their application to breast images. Spatial Statistics, 18:147–162, 2016.

(Richard, 2018) F. Richard. Anisotropy of Hölder Gaussian random field: characterization, estimation and application to image textures. Statistics and Computing, 28(6):1155–1168, 2018.

(Pan, 2018) X. Pan, M. Adel, C. Fossati, T. Gaidon, and E. Guedj. Multi-level Feature Representation of FDG-PET Brain Images for Diagnosing Alzheimer's Disease. IEEE Journal of Biomedical and Health Informatics, 2018.

(Garali, 2018) I. Garali, M. Adel, S. Bourennane, and E. Guedj. Histogram-Based Features Selection and Volume of Interest Ranking for Brain PET Image Classification. IEEE Journal of Translational Engineering in Health and Medicine, 6:2100212, 2018.

Job location: 
Aix-Marseille Université, Technopôle Château-Gombert,
39, rue F. Joliot Curie,
13453 Marseille
France
Contact and application information
Deadline: 
Friday, May 10, 2019
Contact name: 
Frédéric Richard