AI for Solution Design

  • Kristof Kloeckner
  • John Davis
  • Nicholas C. Fuller
  • Giovanni Lanfranchi
  • Stefan Pappe
  • Amit Paradkar
  • Larisa Shwartz
  • Maheswaran Surendra
  • Dorothea Wiesmann
Part of the SpringerBriefs in Computer Science book series (BRIEFSCOMPUTER)


Before we explain how AI could be leveraged to dramatically transform the “Solutioning” phase, it is important to outline the role of this phase across the Service Delivery lifecycle as well as the major personas involved.

As illustrated in Fig. 1, the goal of the Solution Design phase is to design and assemble the final IT solution based on the discussions and requirements gathered in the “Advisory” phase. The outcome of the Solution Design phase then feeds the “Build” phase, where the solution is actually built and inserted into the client’s IT landscape, and the “Operate” phase, where the designed solution is put into production and operated according to the agreed SLAs.
Fig. 1

Service delivery lifecycle

The following key personas are relevant for the Solutioning phase:
  • The Technical Solution Manager (“Adam”) is responsible for mapping client requirements and baselines (volumes) into a complete solution and for ensuring the right price point is met.

  • The domain Subject Matter Expert (“Jeremy”) is in charge of defining architectural patterns, blueprints, and best practices for his own specific domain (e.g. Storage, Network, Change Mgmt., …). Jeremy also acts as primary “information source” for Adam during solution definition.

  • The Client provides requirements (generally in textual documents)

Figure 2 describes the steps that are traditionally taken during the Solution Design phase in the Service Provider Industry. It is worth noting that while the described steps pertain to IT Service Delivery, a similar process is found for delivery of services in other industries, e.g. the construction industry.
Fig. 2

Current steps and actors in Solutioning

The process is typically initiated by the client issuing a “Request for Proposal” (RFP) accompanied by a set of documents which contain the client’s point of view concerning the IT services to be provided, as well as the detailed requirements (functional and nonfunctional, i.e. constraints) that must characterize the solution itself. The information contained in these documents is largely unstructured and often requires many clarifications over a lengthy convergence process.

In a subsequent step, Adam, the Technical Solution Manager, searches for relevant information on the provider’s service delivery capabilities to fulfill the client’s requirements. This step relies heavily on tacit knowledge that is scattered, neither codified nor mapped. In most cases, success depends on the skill of the individual practitioner and/or on her ability to connect with the relevant players and SMEs. Moreover, the solution design suffers from a lack of data-driven evidence and references to similar solutions that are already in the Operate phase of the lifecycle.

In addition to solution creation depending on the individual’s knowledge and network, tool support is fragmented and disconnected. As a result, process execution is strictly sequential, waterfall-style. Issues or changes requested by clients at one step generally require going back to square one. This greatly limits true innovation in the arrangement of the solution (tradeoff assessments, what-if analysis, optimization) and severely inhibits the degree of collaboration, or even co-creation, with the client himself.

Applying AI technologies helps to fundamentally transform this process and to overcome the described challenges. The “Cognitive Solution Designer” tool (CogSol) enables Adam, the Technical Solution Manager, to collaborate with the client in an agile way from RFP to solution and contract (Fig. 3).
Fig. 3

The optimized state enabled with the Cognitive Solution Designer

In particular, the new process enables:
  • Speed and business agility: The entire flow is now transformed into a set of micro-feedback loops that build on one another in a very agile fashion. This ultimately leads to shorter solutioning times and a far better ability to adjust to sudden and unexpected business conditions (e.g. new requirements, security threats, etc.).

  • Continuous collaboration across the major parties and stakeholders: Every persona (e.g. the solutioner, the architect, the client, …) has the right context to collaborate and contribute; iteration and innovation are fostered and supported by the Cognitive Solutioning Knowledge base.

  • Continuous learning: The system is constantly learning; for example, the knowledge acquired in previous engagements makes the system perform better in subsequent iterations. In this way, the huge set of data available in IBM, for example architectural blueprints, solution building blocks, data from the universe of client implementations, etc., can be fully harvested for new solutions.

  • Social curation: We assert that it is critical to truly leverage the technical community as “social curator” of the entire Cognitive Solutioning Knowledge base. This allows us to capitalize on the important expertise and ‘real life’ experience that every practitioner gathers every single day, which would otherwise be lost or relegated to scattered and anecdotal conversations.

  • Client co-creation and intimacy: We have observed that the benefits of the Cognitive Solutioning approach go well beyond the Solutioning phase and are completely changing the game in terms of client relationships as well. The approach allows true co-creation with the client and a deeper level of interaction with, and understanding of, the client landscape.

In the following we describe in detail the technical challenges and their AI-based solution in the Cognitive Solution Designer Tool.

Extraction and Topical Classification of Requirement Statements from Client Documents

As a first step in the Cognitive Solution Designer tool, the service types and attributes requested by the client need to be identified in and extracted from the text documents, as well as topically classified. For this, client documents of different file types are loaded into the system and transformed into a structure-preserving sentence-level representation for extraction, i.e. for each sentence its association with a subsection, section, or list is preserved. Subsequently, functional requirements (“which services?”), nonfunctional requirements (“under which constraints?”) and baselines (“volumes for the service”) are extracted and topically categorized.
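The structure-preserving ingestion just described can be sketched as follows. This is a minimal illustration in Python; the heading markers and record fields are assumptions for the sketch, not the tool's actual document format, and the sentence splitter is deliberately naive.

```python
import re
from dataclasses import dataclass

@dataclass
class SentenceRecord:
    """A sentence plus the structural context it was found in."""
    text: str
    section: str
    subsection: str

def parse_document(lines):
    """Split plain text into sentence records, preserving the association
    of each sentence with its section and subsection headings."""
    section, subsection = "", ""
    records = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        if line.startswith("# "):       # assumed top-level heading marker
            section, subsection = line[2:], ""
        elif line.startswith("## "):    # assumed subsection marker
            subsection = line[3:]
        else:
            # naive sentence split; the real tool handles DOC/PDF structure
            for sent in re.split(r"(?<=[.!?])\s+", line):
                if sent:
                    records.append(SentenceRecord(sent, section, subsection))
    return records
```

Downstream steps can then classify each sentence with its section context attached, which is exactly what the context-backoff classification below relies on.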


The accuracy of the proposed solution critically depends on the initial requirements extraction and topical classification steps: a mediocre level of accuracy not only negatively affects the downstream solutioning process but also diminishes the confidence of the practitioner.

Challenge 1

Identification and extraction of sentences containing requirements. Typically, the client requirements are distributed over tens of RFP documents, mixed with descriptions of the client’s current IT landscape, details about the RFP response process the prospective provider needs to follow, as well as details on contractual terms and conditions. Table 1 contains example sentences from real RFP documents, where the first three sentences contain technical requirements while the last three do not. The examples illustrate that no simple method, e.g. one that identifies certain independent verbs or modal verbs, would be highly accurate.
Table 1

Example sentences from RFP documents

The client looks to the service provider to perform all remote operations services, such as common software image development, anti-virus, patch management and other associated services.

Provide requirements for high-level dashboard performance monitoring, including real-time and historical analysis

Establish system monitoring thresholds (e.g., utilization, capacity) to track data center system components (servers, storage, databases, etc.) performance in accordance with service levels

The client expects the supplier to provide a realistic approach to meeting the timeline, and will be evaluated on their ability to execute that approach.

Collaborative computing services are the activities associated with supporting collaborative tools (e.g., MS exchange/outlook).

All RFP responses must be provided no later than the date and time specified in Table x.

Challenge 2

Topical classification of requirement statements. In order to identify matching service capabilities, the extracted requirement statements need to be topically classified. Several aspects make this classification problem especially challenging, yet it is expected that this challenge is present in many requirement-to-delivery-capability matching scenarios. First, to match the appropriate capability to the requirement we need fine-grained classes. In particular, we have about 20 classes at the first level, e.g. “Enterprise Security”, and about 300 classes across the first and second categorization levels, e.g. “Enterprise Security – Identity and Access”. Consequently, a large amount of training data is needed for any supervised classification approach. Second, unlike in other domains, such as movie sentiment classification or labeling of pictures, such training data did not exist prior to this effort; it needed to be created for the purpose of the project, involving a massive labeling effort. Third, classification of a sentence or a functional requirement needs to be done in the context of a paragraph, subsection, or list, as the information contained in the statement itself sometimes proves to be insufficient.

Solution Overview

Identification and extraction of sentences containing requirements

As mentioned in the challenge section, identification of requirements based merely on the choice of independent verbs is insufficient. However, verbs do strongly differentiate requests from definitions and descriptions. Consequently, we follow the identification approach described in [1]: identification of sentences with (1) verbs indicative of a request for service and (2) learned linguistic features for refinement. A third step was added to the approach in [1], comprising a final filtering using a binary classifier to reduce false positive identifications.

Step (1) is solely based on a positively-labeled set of sentences, i.e. sentences containing requirements statements. From these data sets, we determine the independent verbs and store them as <responsibility verb|true>. If a verb encloses other verbs, it is stored as <responsibility verb|false> unless there exists at least one instance of the verb as independent in which case it is stored as <responsibility verb|true>. This algorithm creates an inclusive set of responsibility verbs, i.e. a sentence containing a responsibility verb is a necessary but not sufficient condition for it being identified as a requirement sentence.
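Step (1) can be sketched as follows. The input format, pre-parsed (verb, is_independent) pairs per labeled requirement sentence, is an assumption for illustration; in practice a dependency parser would supply these.

```python
def build_responsibility_verbs(labeled_sentences):
    """Build the inclusive responsibility-verb set described above.

    `labeled_sentences`: for each positively-labeled requirement sentence,
    a list of (verb_lemma, is_independent) pairs, where is_independent is
    False when the verb encloses another verb.
    """
    verbs = {}
    for sentence in labeled_sentences:
        for verb, independent in sentence:
            # one independent occurrence anywhere flips the flag to True
            verbs[verb] = verbs.get(verb, False) or independent
    return verbs

def is_candidate(sentence_verbs, responsibility_verbs):
    """Necessary-but-not-sufficient test: the sentence contains at least
    one verb stored as <responsibility verb|true>."""
    return any(responsibility_verbs.get(v, False) for v in sentence_verbs)
```

Sentences passing `is_candidate` then proceed to the learned linguistic-feature refinement of step (2).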

In the second step, the linguistic features indicative of requirements are learned. The training set contains all labeled sentences with responsibility verbs. Feature candidates consist of ordered sequences of POS tags, semantic role features of the verb of interest [2], and language token types. It is immediately clear that a feature space created in this manner is very large and sparse. Thus we have chosen a Winnow classifier, which is theoretically guaranteed to quickly converge to a correct hypothesis in a setting with many irrelevant features, no label noise, and a linear separation of the classes [3]. It belongs to the family of online learning algorithms, thus also enabling immediate updates from feedback.
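A minimal Winnow implementation over sparse binary features might look like this. It is a sketch: the threshold and promotion factor are illustrative defaults, not the tool's tuned values.

```python
class Winnow:
    """Winnow over sparse binary features: weights start at 1; on a false
    negative the weights of active features are doubled (promotion), on a
    false positive they are halved (demotion). Mistake-driven updates make
    it suitable for immediate online learning from user feedback."""

    def __init__(self, threshold=4.0, alpha=2.0):
        self.threshold = threshold
        self.alpha = alpha
        self.weights = {}  # feature -> weight, defaulting to 1.0

    def score(self, features):
        return sum(self.weights.get(f, 1.0) for f in features)

    def predict(self, features):
        return self.score(features) >= self.threshold

    def update(self, features, label):
        """Single online update; no change when the prediction is correct."""
        if self.predict(features) == label:
            return
        factor = self.alpha if label else 1.0 / self.alpha
        for f in features:
            self.weights[f] = self.weights.get(f, 1.0) * factor
```

Because updates happen per mistake, each accepted or corrected user annotation can be folded into the model without retraining, matching the online-learning property cited above.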

In the third and final step, we prune false positives from the requirement candidate set. The majority of such false positive identifications are requirement statements which do not pertain to a requested technical service but to the way the service provider needs to submit the offer, e.g. the last statement in Table 1. They use a similar set of independent verbs and have a similar structure, and will thus be identified as requirements in steps 1 and 2. Yet they are irrelevant for selecting the correct technical capability for the solution. We have found that training a final binary classifier, e.g. a gradient boosting machine on a bag-of-words representation of the section title, is the most efficient and effective way to distinguish technical from process requirements.

Table 2 shows the results for the trained extraction method tested on three documents with a total of 810 sentences, out of which 68% contained requirements.
Table 2

Precision and Recall for Requirement Extraction





Topical classification of requirement statements

To address these difficulties, our approach contains the following elements:
  1.

    To bootstrap the classification, together with senior SMEs, we developed a lexicon with single keywords and multi-word key phrases for all classes in the 2-level class hierarchy. In the initial version of the tool, sentences were classified using this lexicon and by applying a rich set of rules to evaluate position-based occurrence of the lexicon key phrases with the learned weights for these rules.

  2.

    To continuously improve the accuracy, we enable the user to add/remove/change tool-identified requirements and their classifications and thus provide feedback on the tool-generated classification. These manual corrections serve the immediate purpose of correcting the identified requirements for the RFP documents in process. The corrected and validated set will subsequently be used to select the matching service capabilities. At the same time, the user feedback generates new training samples to build a training data set sufficient for supervised classifiers and improve the accuracy.

  3.

    To boost the user experienced accuracy and focus the user feedback in an active-learning manner, similar to [4], we leave sentences unclassified where classification confidence is low.

  4.

    To move away from a manually-created and maintained lexicon, we evaluated a number of supervised machine learning methods and found convolutional neural networks (CNNs) to perform better than additive tree models (ATMs) trained on a bag-of-words (BOW) representation in the domain embedding space, even with smaller training data sets.

  5.

    As discussed, in some cases, the information contained in the sentence itself is insufficient to yield an accurate enough classification. If the confidence is below a threshold, the classification of surrounding sentences belonging to the same subsection as well as that of the subsection title is taken into account. This approach leverages the structure-preserving parsing of incoming documents. All parameters of this approach are optimized during the model selection phase.
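Elements 1, 3, and 5 above, lexicon-based scoring, abstention at low confidence, and context backoff, can be sketched together as follows. The lexicon entries, weights, and threshold are purely illustrative; the real lexicon was built with senior SMEs and covers a two-level hierarchy of ~300 classes.

```python
# Hypothetical two-level class lexicon; the real one was built with SMEs.
LEXICON = {
    "Enterprise Security - Identity and Access":
        ["identity", "access management", "single sign-on"],
    "Cloud Services":
        ["virtual machine", "cloud", "provisioning"],
}

def classify(sentence, context="", threshold=1.0, context_weight=0.5):
    """Lexicon-based topical classification with context backoff.

    Key-phrase hits in the sentence count fully; hits in the surrounding
    context (subsection title, neighbouring sentences) count with a lower
    weight. Below the confidence threshold the sentence stays unclassified
    (returns None), so the user supplies the label -- the active-learning
    element described above.
    """
    text, ctx = sentence.lower(), context.lower()
    scores = {}
    for cls, phrases in LEXICON.items():
        scores[cls] = sum(
            (1.0 if p in text else 0.0) + (context_weight if p in ctx else 0.0)
            for p in phrases)
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None
```

The context string here stands in for the subsection title and neighbouring sentences made available by the structure-preserving parsing; the weights and threshold would be optimized during model selection.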

Figure 4 shows a comparison of the precision and recall for the six most important first-level classes achieved with the lexicon-based classification compared to the convolutional neural network based classification, after growing the training set to 4500 labeled instances across all first-level classes. For recall, the CNN approach outperforms the lexicon-based approach for all six classes, indicating a much richer representation of class essence beyond keywords and key phrases. For precision, the CNN based classifier performs better or equally well for five out of the six classes.
Fig. 4

Comparison of precision and recall

Figure 5 shows the learning curves for three selected classes. As can be immediately seen, the scope and (thus) variance of textual representations belonging to a class differ dramatically between classes, and so does the amount of training data needed to reach sufficient precision and recall. For instance, for Class 1, a little over 200 labeled training samples are sufficient to reach high precision and recall (>90%), while the same amount yields much lower precision and recall for the other two classes. In addition to revealing the amount of training data needed, we can also observe two other effects: (a) training data covering only a subset of the class and (b) noisy training data. The latter effect can be observed for Class 5, where recall reaches about 90%, yet the precision is low and stagnant, indicating a noisy training data set. We will further discuss the approaches to clean ground-truth data sets to reduce the labeling noise in the continuous learning-on-the-job deployment in the solution to the third challenge.
Fig. 5

Learning curves for selected classes


For the requirements extraction and classification, we strongly focused on creating a viable first version and learning on the job. To this end, the first deployed instances of the tool were partly based on applying rules for extraction and classification. Yet, even in the very first version, the user would be guided to perform selected corrections, with the dual purpose of creating a validated set of requirements for the downstream solution creation and growing a larger training set. As soon as the training set becomes large enough to yield sufficient performance with supervised methods, these can be put into the field and trained directly from subsequent user feedback for continuous improvement. In our view, the pragmatic combination of rule/regular-expression based methods and best-in-class neural network algorithms allows us to overcome the initial lack of training data and the excessive cost of dedicated label creation.

Matching Client Requirements to Service Capabilities


The identification and topical categorization of requirements statements as described in the previous section is a valuable first step for identifying the service capabilities that are candidates for fulfilling individual client requests. As an example, Fig. 6 shows three functional requirements identified from an RFP document and classified as Cloud Services. Consequently, different service bundles, which we term Offerings, for cloud management become candidates to fulfill these requests. However, to truly decide whether the Offering capabilities fulfill any given requirements in this topical area, further analysis is needed.
Fig. 6

Identified and topically categorized functional requirements

Solution Overview

To achieve a deeper comparison between client requests and service capabilities bundled in Offerings, we further distinguish between two types of requirements: (1) functional requirements, i.e. “what” types of services are requested, and (2) non-functional requirements, i.e. “how” these services should be delivered. Nonfunctional requirements include constraints like regulations and delivery countries and design points like service level agreements on availability or resolution times for issues. While the textual and conceptual representation of the functional requirements is varied and complex, nonfunctional requirements can be described through fewer concepts and thus lend themselves to a structured representation.

In Fig. 7 we show the structured representation for service level agreements together with three example statements. Once such structured representations are created for the client’s non-functional requirements and – if not available already – in the same manner for the service capabilities, it becomes straightforward to assess the compatibility of the capabilities with the request. For an SLA, the type needs to be identical, the scope of the capability needs to include the scope of the request, and the availability percentage of the service capability needs to exceed that of the request, while the response time of the capability needs to be lower than that in the request. Synonyms, hyponyms, and hypernyms for type and scope can be taken into account during the reasoning and/or the extraction into the structured format to make the comparison less dependent on the choice of words.
Fig. 7

Structured SLA Model and example SLA statements
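The SLA compatibility rules described above can be applied directly over the structured representation. The field names below are illustrative, not the tool's actual schema, and synonym/hyponym handling is omitted for brevity.

```python
from dataclasses import dataclass

@dataclass
class SLA:
    """Structured SLA: names and units are illustrative."""
    type: str                   # e.g. "availability", "response_time"
    scope: frozenset            # services/components the SLA covers
    availability: float = 0.0   # percentage, for availability SLAs
    response_time: float = 0.0  # hours, for response-time SLAs

def capability_meets_request(cap: SLA, req: SLA) -> bool:
    """Compatibility rules from the text: identical type, capability scope
    includes request scope, availability at least as high, response time
    at most as long."""
    if cap.type != req.type or not req.scope <= cap.scope:
        return False
    if cap.type == "availability":
        return cap.availability >= req.availability
    if cap.type == "response_time":
        return cap.response_time <= req.response_time
    return True
```

Normalizing synonyms and hypernyms of type and scope during extraction would happen before this check, so that e.g. "uptime" and "availability" compare as the same type.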

In the following we first describe the methods used for the extraction of nonfunctional requirements into structured representations for a deeper comparison described above. Subsequently, we discuss solutions to the open questions around deeper comparisons for functional requirements.

There is a wide range of literature on software requirement extraction from natural language that shares a number of commonalities with our problem. We have decided to build on the approach of [5] to parse the non-functional requirements, and in particular service level agreements, with ontology-based semantic role labeling (SRL). The underlying ontology is shown in Fig. 8. As in [5], in the first step, we instantiate the words in the ontology concepts SLA and type. This is followed by the detection of words and phrases that are related to the SLA and type instances. These are subsequently classified into the property class, i.e. scope or severity.
Fig. 8

Ontology for service level agreements

The first step relies largely on the word forms and word lemmata as well as the POS tag and its word vector, while word forms, POS tags, and dependencies between words are most important to identify the other attributes of an SLA.

For the assessment of functional requirements, it is rather infeasible to parse them into an ontology to assess whether the service bundles have the right capabilities. Conceptually, text entailment methods applied to requirement statements from RFPs and capabilities from the service Statements of Work are most promising. However, these are based on training deep neural networks and thus require large amounts of training data that is expensive to create from scratch. We consequently opted for an intermediate step of equipping the user to assess the fit for purpose and, while doing so, creating training data for future text entailment models. In particular, we plan to determine the most similar capability statement for each extracted functional requirement for visual assessment and user feedback.
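Determining the most similar capability statement for visual assessment could be as simple as a cosine similarity over bag-of-words vectors. This is a baseline sketch; the deployed system may well use richer representations such as domain embeddings.

```python
import math
from collections import Counter

def bow(text):
    """Bag-of-words vector as a token -> count mapping."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(count * b[token] for token, count in a.items())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def most_similar_capability(requirement, capability_statements):
    """Return the capability statement closest to the requirement, to be
    shown to the user for visual assessment; the user's accept/reject then
    becomes a labelled pair for future text entailment models."""
    req_vec = bow(requirement)
    return max(capability_statements, key=lambda c: cosine(req_vec, bow(c)))
```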


Unlike in the case of software requirement parsing, requirement statements in IT Services RFPs – and in other complex service relations, e.g. construction – can be very diverse and the underlying concepts very complex. We have thus opted not to attempt a full parsing of both functional and non-functional requirements after extraction and topical classification. Rather, we have limited the adoption of SRL to the constraints and design points contained in the non-functional requirements, and plan to leverage text entailment models to assess whether capability statements entail, i.e. are compatible with, service requests.

Social Curation and Continuous Learning


The actual business value provided by the tool is of course very dependent on the ability of the Cognitive Solution Designer tool to effectively capture the requirements and to deal with the various nuances that are present in every client-driven document and that may vary depending on the specific service area or even on the client geography.


During the early test phases, it became immediately clear that the standard “one-off” training approach was largely insufficient and we had to augment it with a “continuous learning” mechanism. We needed to make the “learning process” dynamic and continuous so the Cognitive Solution Designer tool can continuously improve and better adapt itself to changing business conditions. In other words, we had to find a way to elevate the “training phase” of the Cognitive Solution Designer tool: from a phase performed once in the lab, typically at the beginning of the project, to an integral part of the e2e DevOps process for the tool under the “continuous improvement” cycle.


Services organizations, like IBM Global Services, are large and have many solution architects, so it is in general very difficult to unify the entire knowledge and expertise of such a technical community. Therefore, the second challenge resides in the ability of the Cognitive Solution Designer to make the technical community the primary actor behind the “continuous learning and improvement” cycle – not just for extracting and topically classifying clients’ requirements but also for selecting the best matching offerings and services. In this way, the Cognitive Solution Designer becomes not only a tool to dramatically streamline the solution lifecycle but also a powerful catalyst to federate and organize, in a cohesive and machine-consumable knowledge base, the entire set of service expertise that would otherwise remain scattered and confined to people’s heads.


The great value of the “Social Curation” approach mentioned in the previous challenge may be severely undermined if we do not also include a strong governance system: one that “accepts” user feedback which ultimately leads to a better accuracy level for the Cognitive Solution Designer tool, and automatically rejects annotations that would produce a “quality drift”.

Solution Overview

We address the challenges outlined above with three solution concepts: (1) capture user feedback pervasively, (2) design the appropriate incorporation of the user feedback, (3) design methodologies to prevent quality drift through either poor or malicious training feedback.

Capture feedback pervasively

User feedback is essential to create an ever-improving and evolving system. Yet, explicit feedback might be hard to motivate if it involves a large effort from the user. Thus, we designed and redesigned our user feedback mechanism, aiming at the right balance of minimal effort and usefulness.

Figure 9 shows a screenshot of the document viewer with the overlaid tool-created annotations for the client requirements. It takes just two clicks for the user to correct them for the subsequent matching of requirements and capabilities, and each such correction is used as a new training instance for continuous improvement. Similarly, accepted tool extractions and classifications are leveraged as new training examples. Likewise, in the decision trees that request user input to refine the selection of appropriate services with strategic considerations, we allow users to use a comment field to add further considerations based on their experience.
Fig. 9

Document viewer with text annotations and pop-up window for correcting tool created annotations
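Turning a two-click correction (or an acceptance) into a labelled training instance might look like this. The action names and record shape are assumptions for the sketch, not the tool's actual data model.

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    """A tool-created annotation shown in the document viewer."""
    sentence: str
    predicted_class: str

def feedback_to_training_example(annotation, user_action, corrected_class=None):
    """Convert a user feedback event into a (sentence, label) pair.

    Accepting confirms the tool's label; correcting substitutes the
    user's label; rejecting records that the sentence is not a
    requirement at all (label None).
    """
    if user_action == "accept":
        return (annotation.sentence, annotation.predicted_class)
    if user_action == "correct":
        return (annotation.sentence, corrected_class)
    if user_action == "reject":
        return (annotation.sentence, None)
    raise ValueError(f"unknown action: {user_action}")
```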

Continuous learning

The continuous feedback collection described above allows for continuous improvement and continuous learning in all machine learning and knowledge-driven decision support models in the tool. However, the update strategy and learning algorithms depend on the model specifics. For the requirement extraction, where we use an online-learning scheme, updates to the machine learning algorithm will be performed incrementally and in near-real time. For requirements classification, where we use a deep-learning based approach, updates are performed regularly by retraining the network. Going forward, to increase the computational efficiency, we are looking into applying incremental learning approaches that have been recently developed for deep neural networks [6]. Another aspect of continuous learning based on user feedback is the prevention of drift or degradation of the training corpus. Here we are applying and extending existing approaches to clean ground-truth data sets [7] to reduce the labeling noise and keep it low in the continuous learning-on-the-job deployment.
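A simple instance-filtering scheme in the spirit of the ground-truth cleaning approaches cited above can be sketched as follows. It is an illustrative stand-in, not the published method: it drops instances whose label disagrees with the majority label of their most similar neighbours.

```python
from collections import Counter

def overlap(a, b):
    """Crude similarity: number of shared tokens (with multiplicity)."""
    return sum((Counter(a.split()) & Counter(b.split())).values())

def filter_noisy_labels(dataset, k=3):
    """Drop an instance when the majority label of its k most similar
    neighbours disagrees with its own label.

    `dataset` is a list of (text, label) pairs; returns the cleaned list.
    """
    cleaned = []
    for i, (text, label) in enumerate(dataset):
        neighbours = sorted(
            (other for j, other in enumerate(dataset) if j != i),
            key=lambda o: overlap(text, o[0]), reverse=True)[:k]
        votes = Counter(lbl for _, lbl in neighbours)
        if not votes or votes.most_common(1)[0][0] == label:
            cleaned.append((text, label))
    return cleaned
```

Run periodically over the feedback-derived training corpus, such a filter keeps individual mislabeled corrections from dragging down the retrained classifiers.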


The ability to capture “implicit” user feedback (typically by monitoring user undo actions or retries) in order to further tune the underlying engine is becoming more and more common in AI systems today (e.g. chatbots).

With the “social curation” approach we have implemented for the Cognitive Solution Designer tool, we have moved one step further: we solicit and incentivize the technical community to contribute to and augment the overall knowledge and company expertise in solution design in an open and highly collaborative fashion. This has not only produced better business outcomes, but has also become a motivating factor for the solution architect workforce, who find personal reward in contributing to the success of the overall organization.


Architectural considerations

To better understand the architectural choices adopted in CogSol, it is important to outline the key design points that CogSol has to address:
  • Coverage of a quite extensive and broad set of scenarios, personas and “interaction styles”. As emerged from the various Design Thinking sessions held at the beginning of the project, the set of user scenarios that CogSol must realize covers a very diverse set of use cases (requirement mapping, solution assembly, data curation, knowledge management, etc.) involving a set of personas with very different goals and professional backgrounds (solution architects, domain subject matter experts, deal managers, etc.)

  • Main realization of the “outer loop” for the entire Platform. CogSol has been designed not only as a key application that streamlines the entire solutioning phase but also as the core component of the “outer loop” for the entire Platform, providing “knowledge-driven service improvement” capabilities to all the other consumable services hosted on the Platform.

  • Data-driven approach. As described above, the vast majority of use cases in CogSol rely on data manipulation and transformation (for example, ingestion of RFPs/RFSs into the system or annotation of the data corpus). CogSol must provide a very efficient way to handle these use cases both from a “quality of service” standpoint (e.g. performance, latency, availability, etc.) as well as from a “data semantics” perspective (e.g. domain entity recognition, data abstraction, data composition into higher representations, etc.)

  • Knowledge management foundation for the Platform. CogSol needs to provide the basic knowledge management primitives for the rest of the Services Platform: knowledge representation, knowledge manipulation, and knowledge federation.

Without getting into low-level implementation details, we now describe the major architectural design elements implemented in CogSol that have enabled us to address the above-mentioned “design points”.
  (a)

    Upfront definition of major scenario flows (with links to use cases and key personas), with articulation of the most relevant “interface” points and mapping to key high-level architectural components.

    As an example, the figure below shows the high-level view of the key components and services concerning the RFP phase.

  (b)
    Adoption of an MVC (Model View Controller pattern) approach based on micro-services.
    • Definition of groups of related micro-services (especially the ones with similar responsibilities and working on adjacent data), combined into higher-level “services”. Assignment of clear responsibilities and APIs to these “services”

    • Leveraging the Kubernetes framework

    • Shielding the details of data-model and data structures through a data service that provides forward compatible REST APIs and data helpers.

    • Implementation of a flexible data service: a modular component that can be split into different data services, used by other components to provide data-persistence and retrieval functions in a standard way

    • This approach has also enabled a lot of reuse of existing components through simple refactoring


The following picture expands the RFP use cases shown above and presents the linkage with “micro-services groups” and data layer.

  (c)
    DevOps chain. We have defined an e2e DevOps chain to provide a continuous improvement process at two levels:
    1.

      Delivery process: a continuous-integration build process, automatically triggered each time a developer pushes new code into a GitHub repository and then managed through the various DevOps chain stages until promotion into production; we mainly leverage UrbanCode as well as the core of the DevOps toolchain available in the Platform to manage this delivery process

    2.

      Continuous knowledge refinement: CogSol is not only a typical aaS application that needs to live on the Platform and leverage the e2e DevOps continuous delivery process just described. CogSol also enables a “continuous delivery and refinement of knowledge”, powered by our practitioner community through the annotation and “social curation” methods described in this paper. This is why we have augmented our DevOps chain to include all the steps and mechanisms needed to keep the CogSol knowledge base always fresh and up to date



In the “Cognitive Solution Designer” tool, we have developed and applied AI technologies to fundamentally transform the IT Delivery solutioning process to enable speed and agility in understanding the clients’ requirements, co-creation of solutions with the client and continuous learning from the practitioners using the system. In order to create such a system, we solved and continue to solve NLP and machine learning challenges from identifying client requirements in large text documents to deep matching of such requirements with service capabilities to establishing efficient and robust continuous learning approaches.



We’d like to thank the whole “Cognitive Solution Designer” research and development team and all visionary supporters of this project.


  1. Nezhad HRM, et al (2017) eAssistant: cognitive assistance for identification and auto-triage of actionable conversations. WWW 2017
  2. Màrquez L, Carreras X, Litkowski KC, Stevenson S (2008) Semantic role labeling: an introduction to the special issue. Comput Linguist 34(2):145–159
  3. Nigam K, Hurst M (2004) Towards a robust metric of opinion. In: Proceedings of the AAAI Spring Symposium on Exploring Attitude and Affect in Text: Theories and Applications
  4. Bakis R, Connors DP, Dube P, Kapanipathi P, Kumar A, Malioutov D, Venkatramani C (2017) Performance of natural language classifiers in a question-answering system. IBM J Res Develop 61(4):14:1–14:10
  5. Roth M, Klein E (2015) Parsing software requirements with an ontology-based semantic role labeler. In: Proceedings of the 1st Workshop on Language and Ontologies
  6. Käding C, et al (2016) Fine-tuning deep neural networks in continuous learning scenarios. In: Asian Conference on Computer Vision. Springer, Cham
  7. Agarwal S, et al (2007) How much noise is too much: a study in automatic text classification. In: Seventh IEEE International Conference on Data Mining (ICDM 2007)

Copyright information

© The Author(s), under exclusive licence to Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Kristof Kloeckner (1)
  • John Davis (2)
  • Nicholas C. Fuller (3)
  • Giovanni Lanfranchi (1)
  • Stefan Pappe (4)
  • Amit Paradkar (3)
  • Larisa Shwartz (3)
  • Maheswaran Surendra (5)
  • Dorothea Wiesmann (6)
  1. Global Technology Services, IBM, Armonk, USA
  2. Global Technology Services, IBM, Hursley, UK
  3. IBM Research Division, IBM, Yorktown Heights, USA
  4. Global Technology Services, IBM, Mannheim, Germany
  5. Global Technology Services, IBM, Yorktown Heights, USA
  6. IBM Research Division, IBM, Rüschlikon, Switzerland
