Gleaning information from natural language sources, including requirements documents and online social forums, is a staple skill of the human software engineer. It has become an important target for automation, both because so much of this unstructured or semi-structured information is available and because these sources carry a great deal of inherent “noise.” Two papers in this issue deal with aspects of automating this important work of the practicing software engineer. We also have two other excellent papers, one on the topic of automated verification reasoning and the other an empirical study of software reuse.

  • In “Semantic Tagging and Linking of Software Engineering Social Content”, Bagheri and Ensan describe an interesting method that uses Wikipedia as a semantic knowledge source to organize the contents of online software engineering forums and other social media.

  • Ponsini, Michel, and Rueher show how to use constraint reasoning to refine the abstractions computed by abstract interpretation techniques in their paper, “Verifying floating-point programs with constraint programming and abstract interpretation techniques”.

  • In “Assisting Requirements Analysts to Find Latent Concerns with REAssistant”, Rago, Marcos, and Diaz-Pace describe their tool, REAssistant, which can automatically extract semantic information from natural-language requirements documents and link together scattered passages that are all relevant to a given crosscutting concern.

  • Finally, in “Assessment and cross-product prediction of SPL quality: accounting for reuse across products, over multiple releases”, Devine, Goseva-Popstojanova, Krishnan, and Lutz give us the results of an interesting study of how two types of reuse, reuse within a product line and reuse across two or more product lines, affect quality and the detection of fault-proneness.

You are invited, as always, to email me at Bob.ASEJ@gmail.com with your thoughts on this issue or the Journal in general.