Design and Simulation of Array Cells of Mixed Sensor Processors for Intensity Transformation and Analog-Digital Coding in Machine Vision

  • Vladimir G. Krasilenko
  • Alexander A. Lazarev
  • Diana V. Nikitovich


The chapter demonstrates the urgent need for video sensors and processors capable of parallel (simultaneous, pixel-by-pixel) image processing with advanced functionality and multichannel picture outputs. We consider promising areas of application for such sensor processors, in particular hardware high-performance architectures of neural networks, convolutional neural structures, parallel matrix-matrix multipliers, and special processor systems. We present and analyze the theoretical foundations and the mathematical apparatus of matrix and continuous logic, describe their basic operations, demonstrate their functional completeness, and evaluate their advantages and prospects for the design of biologically inspired devices and systems for processing and analyzing array signals. We show that certain continuous-logic functions, including the normalized-equivalence operations on vector and matrix signals and the limited-difference operation of continuous logic, form a powerful basis for designing improved smart microcells for analog transformations and analog-to-digital encoding. In the following sections of the chapter, we consider in more detail the design and modeling of such basic microcells and of the continuous-logic high-speed ADCs built from them. A picture-type ADC consists of an array of parallel operating channels, each of which is a basic microcell or a set of them. The basic microcell of a 2D ADC structure consists of several digital-analog cells (DC), each implemented with 15–35 CMOS transistors. An iterative-type ADC requires only one DC cell, namely a DC-(G), plus a sample-and-hold device (SHD); in this case, the entire basic microcell can be implemented with just 35 CMOS transistors. A single-channel ADC cell with iteration produces a serial-parallel output code. For a non-iterative ADC, the basic microcell consists of as many DC cells as the required code width demands.
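The iterative channel described above can be summarized behaviorally. The sketch below is our assumption of the signal flow, not the authors' transistor-level circuit: each iteration applies the continuous-logic folding transform y = 2·min(x, A − x) (expressible with the min and limited-difference operations), and the comparator decision at half scale yields the next Gray-code bit; the names `cl_adc_gray` and `gray_to_binary` are hypothetical.

```python
def cl_adc_gray(x, full_scale=24.0, bits=6):
    """Behavioral model of one iterative continuous-logic ADC channel.

    Per iteration: emit the comparator bit (x >= A/2), then fold the
    residue with the CL transform y = 2*min(x, A - x).  The emitted
    bit sequence, MSB first, is the Gray code of the input level.
    """
    code = []
    for _ in range(bits):
        code.append(1 if x >= full_scale / 2 else 0)
        x = 2.0 * min(x, full_scale - x)
    return code

def gray_to_binary(gray):
    """Convert a Gray-code bit list (MSB first) to plain binary."""
    b, out = 0, []
    for g in gray:
        b ^= g          # running XOR recovers the binary bits
        out.append(b)
    return out
```

For example, with a normalized full scale of 1.0, an input of 0.3 (level 2 of 8 at 3-bit resolution) yields the Gray code `[0, 1, 1]`, i.e., binary `[0, 1, 0]`, matching the serial-parallel Gray output described for the iterative channel.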
To simulate the proposed schemes, we used OrCAD; the results are presented below. The conversion time for 6–8-bit binary or Gray codes over an input photocurrent range of 0.1–24 μA is 20–30 ns at a supply voltage of 1.8–3.3 V. With a maximum input current of 4 μA, the total power consumption of the iterative ADC was only 50–100 μW. The low power consumption at such supply voltages and the good dynamic characteristics (a digitization frequency of 40–50 MHz even in 1.5 μm CMOS technology) indicate good prospects, since the structure of the linear ADC array and its microcells is very simple; with more advanced CMOS transistors, the conversion frequency can be increased tenfold. Thus, the proposed ADCs based on continuous-logic basic cells (CL BC) and current mirrors (CM) are promising for creating photoelectric structures with matrix operands, digital optoelectronic processors, linear and matrix image processors (IP), and other neural-like structures required for neural networks and neuro-fuzzy controllers. The chapter also presents a generalized method for designing devices for nonlinear transformation of photocurrent intensity using a set of similar basic modified cells implemented in conventional CMOS technology. To realize the required nonlinear transformation function, we use a decomposition method: the shape of the synthesized function is determined by the choice of suitable parameters, specified either as constants or as adjustable parameters that select or change the type of nonlinear transformation. We also show the need for different types of nonlinear intensity conversion of photocurrents and for different codes (Gray, binary) in the AD conversion of such parallel sensor devices and systems, especially for implementing various activation functions in hardware neural networks, and we consider the use of such parallel matrix arrays to create progressive IPs and neural networks (NN).
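The decomposition method for synthesizing a nonlinear transfer function can be illustrated numerically. The sketch below is a minimal behavioral model under our own assumptions (the chapter realizes this with CMOS current-mirror cells): the target function is decomposed into a weighted sum of λ-shaped basis cells built solely from the limited-difference and min operations of continuous logic, and the node values act as the selectable parameters that set the transformation type. The names `lim_diff`, `triangle`, and `nonlinear_transform` are hypothetical.

```python
def lim_diff(a, b):
    """Limited (bounded) difference of continuous logic: max(a - b, 0)."""
    return max(a - b, 0.0)

def triangle(x, center, width):
    """Unit triangular basis cell built from CL operations only:
    rises on [center-width, center], falls on [center, center+width]."""
    return min(lim_diff(x, center - width), lim_diff(center + width, x)) / width

def nonlinear_transform(x, nodes, values, width):
    """Decomposition of a nonlinear transfer function as a weighted sum
    of triangular CL basis cells; with nodes spaced `width` apart this
    is a piecewise-linear interpolant through (node_i, value_i)."""
    return sum(v * triangle(x, c, width) for c, v in zip(nodes, values))
```

Changing only the node values reshapes the characteristic, e.g., an S-like photocurrent transfer over the 0–24 μA range with nodes at 0, 6, 12, 18, 24 μA and values 0, 1, 4, 7, 8 μA.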
The cells we propose operate at a low supply voltage of 1.8–3.3 V, consume little power (microwatts), have a conversion time below 1 μs, and consist of only a few dozen transistors. We also consider cells that implement various neuron activation functions for neural networks and nonlinear transfer characteristics of the S-, N-, and λ-types. In conclusion, we give estimates and show the prospects of these approaches to the design of sensor processors.
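The three transfer-characteristic shapes named above can each be composed from the same two continuous-logic primitives. The functions below are one plausible piecewise construction of ours, not the chapter's circuits; all names and the particular break-point choices are assumptions for illustration.

```python
def lim_diff(a, b):
    """Limited difference of continuous logic: max(a - b, 0)."""
    return max(a - b, 0.0)

def s_type(x, threshold, ceiling):
    """S-type characteristic: dead zone below `threshold`, linear rise,
    hard saturation at `ceiling`."""
    return min(lim_diff(x, threshold), ceiling)

def lambda_type(x, full_scale):
    """Lambda-type characteristic: linear rise to mid-scale, then fall."""
    return min(x, lim_diff(full_scale, x))

def n_type(x, full_scale):
    """N-type characteristic (rise, fall, rise): a lambda section over
    the first two thirds of the range plus a delayed ramp."""
    two_thirds = 2.0 * full_scale / 3.0
    return lambda_type(x, two_thirds) + lim_diff(x, two_thirds)
```

Because every branch is a min, max, or bounded subtraction of currents, such shapes map naturally onto the current-mirror cells the chapter describes.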


Continuous logical ADC · Image processor · Multichannel sensor systems · Current mirror · Equivalence (nonequivalence) functions · Methods of selection and rank preprocessing · Self-learning equivalent convolutional neural structures · Equivalent models · Continuous logical operations · 2D spatial function · Neuron equivalentor · Image intensity transformation · Nonlinear processing



  • Auto-associative memory
  • Analog-digital basic cell
  • Analog-to-digital converter
  • Associative memory
  • Basic cell
  • Binary image algebra
  • Current-controlled current amplifiers on current mirror multipliers
  • Complementary double NE
  • Continuous logic
  • Continuous logic cell
  • Continuous logical equivalence model
  • Continuous logic function
  • Current mirror
  • Current multiplier mirror
  • Complementary metal-oxide-semiconductor
  • Convolutional neural network
  • Digital-to-analog converter
  • Digital-analog cell
  • Digital optoelectronic processor
  • Equivalence model
  • Equivalent continuous-logical
  • Field-programmable analog array
  • Hetero-associative memory
  • Image processor
  • Multi-port AAM
  • Multi-port hetero-associative memory
  • Multi-input and multi-output
  • Array of microlenses
  • Neural element
  • Normalized equivalence
  • Neuron equivalentors
  • Neural network
  • Normalized nonequivalence
  • Normalized spatial equivalence function
  • Optoelectronic very large scale integration
  • Spatially dependent normalized equivalence function
  • Sample and hold device
  • Spatially invariant equivalence model of associative memory
  • Self-learning equivalent convolutional neural structure
  • Multichannel sensory analog-to-digital converter
  • Transfer characteristics
  • Time-pulse-coded architecture
  • Universal (multifunctional) logical element
  • Vector or matrix organization



Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • Vladimir G. Krasilenko (1)
  • Alexander A. Lazarev (1)
  • Diana V. Nikitovich (1)

  1. Vinnytsia National Technical University, Vinnytsia, Ukraine
