The Multilayer Random Neural Network

Abstract

In this paper we propose an extended model of the random neural network with a multi-feedback architecture: the network is organized into several layers, and the neurons of each layer communicate with the neurons of the neighboring layers. We present its learning algorithm and possible applications; in particular, we test its use in an encryption mechanism in which each layer is responsible for part of the encryption or decryption process. Since the multilayer random neural network is a stochastic neural model, the entire proposed encryption scheme inherits this stochastic character.
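As background for the architecture described above, Gelenbe's random neural network assigns to every neuron i a steady-state excitation probability q_i = λ⁺_i / (r_i + λ⁻_i), where λ⁺_i and λ⁻_i are the total excitatory and inhibitory signal arrival rates and r_i is the firing rate. The snippet below is a minimal illustrative sketch, not the authors' implementation: it iterates these fixed-point equations on a layered topology in which weights are non-zero only between adjacent layers, and all layer sizes, weights and arrival rates are hypothetical placeholders.

```python
import numpy as np

# Minimal sketch (not the authors' implementation) of the steady-state
# equations of Gelenbe's random neural network on a layered topology:
#   q_i = lambda_plus_i / (r_i + lambda_minus_i),
# where signals travel only between adjacent layers.
# All sizes, weights and arrival rates below are hypothetical.

def rnn_steady_state(W_plus, W_minus, Lambda, lam, r, iters=500):
    """Fixed-point iteration for the excitation probabilities q."""
    q = np.zeros(len(r))
    for _ in range(iters):
        lam_plus = Lambda + q @ W_plus    # total excitatory arrival rate per neuron
        lam_minus = lam + q @ W_minus     # total inhibitory arrival rate per neuron
        q = np.minimum(lam_plus / (r + lam_minus), 1.0)
    return q

# Hypothetical three-layer network: 4 input, 3 hidden, 2 output neurons,
# with connections only from each layer to the next one.
sizes = [4, 3, 2]
n = sum(sizes)
offsets = np.concatenate(([0], np.cumsum(sizes)))
rng = np.random.default_rng(0)
W_plus = np.zeros((n, n))
W_minus = np.zeros((n, n))
for l in range(len(sizes) - 1):
    a, b, c = offsets[l], offsets[l + 1], offsets[l + 2]
    W_plus[a:b, b:c] = rng.uniform(0.0, 0.5, (sizes[l], sizes[l + 1]))
    W_minus[a:b, b:c] = rng.uniform(0.0, 0.5, (sizes[l], sizes[l + 1]))

r = W_plus.sum(axis=1) + W_minus.sum(axis=1) + 1.0   # firing rates r_i
Lambda = np.zeros(n)
Lambda[:sizes[0]] = 1.0                              # external excitation of the input layer
lam = np.zeros(n)                                    # no external inhibition

print(rnn_steady_state(W_plus, W_minus, Lambda, lam, r))
```

Under the usual stability condition of the random neural network this iteration converges; clipping q at one is only a safeguard for this toy example.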

Author information

Corresponding author

Correspondence to Jose Aguilar.

Cite this article

Aguilar, J., Molina, C. The Multilayer Random Neural Network. Neural Process Lett 37, 111–133 (2013). https://doi.org/10.1007/s11063-012-9237-x
