Abstract
Artificial Intelligence (AI) research in the past decade has led to the development of Generative AI, where AI systems generate new content after learning patterns from large volumes of training data. Generative AI can create original work, such as an article, code, a painting, a poem, or a song. Google Brain initially used Large Language Models (LLMs) for context-aware text translation, and Google went on to develop Bidirectional Encoder Representations from Transformers (BERT) and the Language Model for Dialogue Applications (LaMDA). Facebook created OPT-175B and BlenderBot, while OpenAI developed GPT-3 for text, DALL-E 2 for images, and Whisper for speech. GPT-3 was trained on around 45 terabytes of text data at an estimated cost of several million dollars. Generative models have also been developed by online communities such as Midjourney and open-source platforms such as HuggingFace. On November 30, 2022, OpenAI launched ChatGPT, which used natural language processing (NLP) techniques and was built on an LLM. There was excitement and caution as OpenAI's ChatGPT reached one million users in just five days and, in January 2023, reached 100 million users. Many marveled at its eloquence and the limited supervision with which it generated code and answered questions. More deployments followed: Microsoft's OpenAI-powered Bing on February 7, 2023, and Google's Bard on February 8, 2023. We describe how LLMs work and the opportunities and challenges they present for our modern world.
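At their core, LLMs generate text autoregressively: they repeatedly predict the next token given the tokens produced so far. The following toy sketch illustrates that loop with a simple bigram model (word-to-next-word counts) and greedy decoding; real LLMs replace the count table with a transformer network trained on terabytes of text, but the generate-one-token-at-a-time structure is the same. The corpus and function names here are illustrative, not from any specific system.

```python
from collections import defaultdict

def train_bigram(corpus):
    """Count word -> next-word transitions in a whitespace-tokenized corpus."""
    counts = defaultdict(lambda: defaultdict(int))
    tokens = corpus.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, start, max_tokens=10):
    """Autoregressive generation: append one predicted token at a time."""
    out = [start]
    for _ in range(max_tokens):
        followers = counts.get(out[-1])
        if not followers:
            break  # no known continuation for this token
        # Greedy decoding: pick the most frequent next token.
        out.append(max(followers, key=followers.get))
    return " ".join(out)

corpus = "the model reads text and the model writes text and the model learns"
counts = train_bigram(corpus)
print(generate(counts, "the", max_tokens=4))  # -> "the model reads text and"
```

An LLM differs from this sketch in scale and in how it scores continuations (a neural network conditioned on the full context rather than just the previous word), and production systems typically sample from the predicted distribution instead of always taking the most likely token.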
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Barreto, F., Moharkar, L., Shirodkar, M., Sarode, V., Gonsalves, S., Johns, A. (2023). Generative Artificial Intelligence: Opportunities and Challenges of Large Language Models. In: Balas, V.E., Semwal, V.B., Khandare, A. (eds) Intelligent Computing and Networking. IC-ICN 2023. Lecture Notes in Networks and Systems, vol 699. Springer, Singapore. https://doi.org/10.1007/978-981-99-3177-4_41
DOI: https://doi.org/10.1007/978-981-99-3177-4_41
Published:
Publisher Name: Springer, Singapore
Print ISBN: 978-981-99-3176-7
Online ISBN: 978-981-99-3177-4
eBook Packages: Intelligent Technologies and Robotics (R0)