Special Issue on Neuro-Symbolic Intelligence: Large Language Model Enabled Knowledge Engineering

Emerging neural systems, namely Large Language Models (LLMs) such as ChatGPT and GPT-4, have revolutionized symbolic techniques such as Knowledge Engineering, owing to their unprecedented capabilities and broad adaptability. However, the inherent "black-box" nature of LLMs sometimes compromises their ability to consistently access accurate knowledge, and LLMs still face the risk of producing misinformation, biased information, or malicious content. On the other hand, Knowledge Graphs (KGs), the representative technique of knowledge engineering exemplified by platforms like Wikipedia and Wikidata, are structured knowledge models that offer rich, explicit, and accurate knowledge. KGs can enhance the capabilities of LLMs by providing access to external knowledge and bolstering interpretability. Yet the dynamic nature of the world makes KGs challenging to construct, maintain, and query, posing obstacles for methods that aim to handle new facts and represent emergent knowledge.

Neuro-symbolic methods attempt to integrate state-of-the-art neural techniques (e.g., LLMs) with symbolic methods (e.g., knowledge engineering) to offer the best of both worlds, and they have gained increasing attention. For this special issue, we welcome original research papers, application studies, and resource submissions (e.g., tools and datasets).

For further information, please see the full CFP under "Journal Updates."
