Encyclopedia of Big Data Technologies

2019 Edition
| Editors: Sherif Sakr, Albert Y. Zomaya

Emerging Hardware Technologies

  • Xuntao Cheng
  • Cheng Liu
  • Bingsheng He
Reference work entry
DOI: https://doi.org/10.1007/978-3-319-77525-8_170

Definitions

This chapter introduces emerging hardware technologies such as GPUs, x86-based many-core processors, and die-stacked DRAMs, which have significant impacts on big data applications.

Overview

This chapter introduces emerging hardware technologies and their impacts on big data applications. For processors, it covers GPUs, Intel Xeon Phi many-core processors, FPGAs, and specialized processors. It also covers new memory technologies such as die-stacked DRAMs and storage-class memory. Finally, we introduce network interconnects.

Outline

We introduce emerging hardware technologies for processors, memory, hard disks, and networks. For processors, we focus on many-core processors (GPUs and Intel Xeon Phi), FPGAs, and specialized hardware.

Many-Core Processors

GPU

Graphics processing units (GPUs) are specialized processors originally designed to manipulate memory at massive scale and high speed in order to accelerate the creation of images. They have been redesigned to...
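As a brief illustration of the data-parallel execution model behind this design (a sketch of our own, not taken from the entry; the kernel name, buffer, and sizes are hypothetical), the following minimal CUDA program launches about a million lightweight threads, each of which touches exactly one memory element. This one-thread-per-element pattern is the kind of massive-scale, high-speed memory manipulation GPUs were built for:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread updates a single element, so all elements are
// processed in parallel across the GPU's many cores.
__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)
        data[i] *= factor;  // e.g., brighten one pixel of an image
}

int main() {
    const int n = 1 << 20;  // roughly one million elements
    float *d;
    cudaMallocManaged(&d, n * sizeof(float));  // unified CPU/GPU memory
    for (int i = 0; i < n; ++i)
        d[i] = 1.0f;

    // Launch enough 256-thread blocks to cover all n elements.
    scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);
    cudaDeviceSynchronize();  // wait for the GPU to finish

    printf("first element after scaling: %f\n", d[0]);
    cudaFree(d);
    return 0;
}
```

The same loop on a CPU would run its million iterations largely sequentially; on a GPU, the iterations map onto hardware threads that execute concurrently, which is why data-parallel big data workloads benefit from this architecture.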



Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Nanyang Technological University, Jurong West, Singapore
  2. National University of Singapore, Singapore