Emerging Hardware Technologies
This chapter introduces emerging hardware technologies and their impact on big data applications, spanning processors, memory, storage, and network interconnects. For processors, it covers many-core processors (GPUs and Intel Xeon Phi), FPGAs, and specialized processors. It then surveys new memory technologies such as die-stacked DRAM and storage-class memory, along with developments in hard disks, and concludes with network interconnects.
Graphics processing units (GPUs) are specialized processors originally designed to rapidly manipulate large amounts of memory in order to accelerate the creation of images. They have since been redesigned for general-purpose computation, exploiting their massively parallel architecture to accelerate data-parallel workloads such as big data analytics.