Encyclopedia of Machine Learning and Data Mining

2017 Edition
| Editors: Claude Sammut, Geoffrey I. Webb

Case-Based Reasoning

  • Susan Craw
Reference work entry
DOI: https://doi.org/10.1007/978-1-4899-7687-1_34

Abstract

Case-based reasoning (CBR) solves problems by retrieving similar, previously solved problems and reusing their solutions. The case base contains a set of cases, and each case holds knowledge about a problem or situation, together with its corresponding solution or action. The case base acts as a memory: remembering is achieved using similarity-based retrieval, and the retrieved solutions are reused. Newly solved problems may be retained in the case base, and so the memory is able to grow as problem-solving occurs.

CBR reuses remembered experiences, where the experience need not record how the solution was reached, simply that the solution was used for the problem. The reliance on stored experiences means that CBR is particularly relevant in domains which are ill defined, not well understood, or where no underlying theory is available. CBR systems are a useful way to capture corporate memory of human expertise.

The fundamental assumption of CBR is that similar problems have similar solutions: a patient with similar symptoms will have the same diagnosis, the price of a house with similar accommodation and location will be similar, the design for a kitchen with a similar shape and size can be reused, and a journey plan is similar to an earlier trip. A related assumption is that the world is a regular place, and what holds true today will probably be true tomorrow. A further assumption relevant to memory is that situations repeat, because if they do not, there is no point remembering them!

Synonyms

Theory/Solution

Case-based reasoning (CBR) is inspired by memory-based human problem-solving in which instances of earlier problem-solving are remembered and applied to solve new problems. For example, in case law, the decisions in trials are based on legal precedents from previous trials. In this way, specific experiences are memorized, and remembered and reused when appropriate. This contrasts with rule-based or theory-based problem-solving in which knowledge of how to solve a problem is applied. A doctor diagnosing a patient’s symptoms may apply knowledge about how diseases manifest themselves, or she may remember a previous patient who demonstrated similar symptoms and thus apply a case-based approach.

CBR is an example of  lazy learning because there is no learned model to apply to solve new problems. Instead, the generalization needed to solve unseen problems happens when a new problem is presented and the similarity-based retrieval identifies relevant previous experiences.

Figure 1 shows the CBR problem-solving cycle proposed by Aamodt and Plaza (1994). A case base of Previous Cases is the primary knowledge source in a CBR system, with additional knowledge being used to identify similar cases in the Retrieve stage, and to Reuse and Revise the retrieved case. A CBR system learns as it solves new problems when a Learned Case is created from the New Case and its Confirmed Solution, and Retained as a new case in the case base.
Case-Based Reasoning, Fig. 1

CBR cycle (Adapted from Aamodt and Plaza 1994)

Aamodt and Plaza’s four-stage CBR cycle for problem-solving and learning is commonly referred to as the “Four REs” or “R4” cycle to recognize the following stages in Fig. 1:
  • Retrieve: The cases that are most similar to the New Case defined by the description of the new problem are identified and retrieved from the case base. The Retrieve stage uses the similarity knowledge container in addition to the case base.

  • Reuse: The solutions in the Retrieved (most similar) Cases are reused to build a Suggested Solution to create the Solved Case from the New Case. The Reuse stage may use the adaptation knowledge container to refine the retrieved solutions.

  • Revise: The Suggested Solution in the Solved Case is evaluated for correctness and is repaired if necessary to provide the Confirmed Solution in the Tested/Repaired Case. The Revise stage may be achieved manually or may use adaptation knowledge, but it should be noted that a revision to a Suggested Solution is likely to be a less demanding task than solving the problem from scratch.

  • Retain: The Repaired Case may be retained in the case base as a newly Learned Case if it is likely to be useful for future problem-solving. Thus the primary knowledge source for CBR may be added to during problem-solving and is an evolving, self-adaptive collection of problem-solving experiences.

This “Four REs” cycle simply Retains the Tested/Repaired Case as a new Learned Case. More recently, the Retain stage has been replaced by the Recycle-Retain-Refine loop of a “Six REs” cycle proposed by Gokër and Roth-Berghofer (1999) and shown in Fig. 2. Learned Cases are Recycled as potential new cases, the Retain step validates their correctness, and the Refine stage decides whether the case should be integrated and how this should be done. The new case may be added, used to replace redundant cases, or merged with existing cases, and further case base maintenance may be required to preserve the integrity of the CBR system. The maintenance cycle is often executed separately from the problem-solving Application Cycle.
Case-Based Reasoning, Fig. 2

Six REs CBR cycle (Adapted from Gokër and Roth-Berghofer 1999)

Knowledge Containers

Case knowledge is the primary source of knowledge in a CBR system. However, case knowledge is only one of four knowledge containers identified by Richter (2009):
  • Vocabulary: The representation language used to describe the cases captures the concepts involved in the problem-solving.

  • Similarity knowledge: The similarity measure defines how the distances between cases are computed so that the nearest neighbors are identified for retrieval.

  • Adaptation knowledge: Reusing solutions from retrieved cases may require some adaptation to enable them to fit the new problem.

  • Case base: The stored cases capture the previous problem-solving experiences.

The content of each knowledge container is not fixed, and knowledge in one container can compensate for lack of knowledge in another. It is easy to see that a more sophisticated knowledge representation could be less demanding on the content of the case base. Similarly, vocabulary can make similarity assessment during retrieval easier, or a more complete case base could reduce the demands on adaptation during reuse. Further knowledge containers are proposed by others (e.g., maintenance by Gokër and Roth-Berghofer 1999).

Cases may be represented as simple feature vectors containing nominal or numeric values. A case capturing a whisky-tasting experience might contain features such as sweetness, peatiness, color, nose, and palate, together with the classification as the distillery where it was made:

Sweetness | Peatiness | Color | Nose | Palate     | Distillery
6         | 5         | amber | full | medium dry | Dalmore

More structured representations can use frame-based or object-oriented cases. The choice of representation depends on the complexity of the experiences being remembered and is influenced by how similarity should be determined. Hierarchical case representations allow cases to be remembered at different levels of abstraction, and retrieval and reuse may occur at these different levels.

For  classification tasks, the case base can be considered to contain exemplars of problem-solving. This notion of exemplar confirms a CBR case base as a source of knowledge; it contains only those experiences that are believed to be useful for problem-solving. A similar view is taken for non-classification domains where the case base contains useful prototypes: for example, designs that can be used for redesign, plans for replanning, etc.

One of the advantages of CBR is that a case base is composed of independent cases, each capturing some local problem-solving knowledge that has been experienced. Therefore, the “knowledge acquisition bottleneck” of many rule-based and model-based systems is reduced for CBR. However, the other knowledge containers impose additional knowledge acquisition demands that may lessen the advantage of CBR for some domains.

Retrieval

CBR retrieval compares the problem part of the new case with each of the cases in the case base to establish the distance between the new case and the stored cases. The cases closest to the new case are retrieved for reuse. Retrieval is a major focus of López de Mántaras et al.’s (2005) review of research contributions related to the CBR cycle.

Similarity and distance are commonly used interchangeably when discussing CBR retrieval. They are inverses: the similarity is highest when the distance is close to 0, and the similarity is 0 when the distance is large. Several functions may be applied to define a suitable relationship between a distance d and a similarity s, including the following simple versions:
$$\text{Inverse: } s = \frac{1}{d + 1} \qquad \text{Linear: } s = 1 - d \ \text{ for normalized } d \ (0 \leq d \leq 1)$$
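These two distance-to-similarity conversions can be written directly; a minimal sketch (function names are mine):

```python
def inverse_similarity(d):
    # s = 1 / (d + 1): similarity 1 at distance 0, tending to 0 as d grows.
    return 1.0 / (d + 1.0)

def linear_similarity(d):
    # s = 1 - d, valid only for a distance normalized into [0, 1].
    assert 0.0 <= d <= 1.0, "linear similarity requires a normalized distance"
    return 1.0 - d
```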
It is common to establish the distance between each pair of feature values and then to use a distance metric, often Euclidean or  Manhattan distance (see also  Similarity Measures), to calculate the distance between the feature vectors for the New and Retrieved Cases. The distance between two numeric feature values v and w for a feature F is normally taken to be the distance between the normalized values:
$$d(v,w) = \frac{\mid v - w\mid}{F_{\max} - F_{\min}}$$
where $F_{\max}$ and $F_{\min}$ are the maximum and minimum values of the feature F.
For nominal values v and w, the simplest approach is to apply a binary distance function:
$$d(v,w) = \begin{cases} 0 & \text{if } v = w \\ 1 & \text{otherwise} \end{cases}$$
For ordered nominal values, a more fine-grained distance may be appropriate. The distance between the ith value $v_i$ and the jth value $v_j$ in the ordered values $v_1, v_2, \ldots, v_n$ may use the separation in the ordering to define the distance:
$$d(v_i, v_j) = \frac{\mid i - j\mid}{n - 1}$$
Extending this to arbitrary nominal values, a distance matrix may define the distance between each pair of nominal values by assigning the distance $d(v_i, v_j)$ to the entry $d_{ij}$. Alternatively, there may be background knowledge in the form of an ontology or concept hierarchy, where the depth $D$ of each value in the structure, compared to the depth of their least common ancestor (lca), provides a measure of separation:
$$d(v_i, v_j) = \frac{D(v_i) + D(v_j)}{2\,D(\mathrm{lca})}$$
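The feature-level distance measures above can be sketched as follows; the function names and argument conventions are illustrative:

```python
def numeric_distance(v, w, f_min, f_max):
    # Normalized numeric distance |v - w| / (F_max - F_min).
    return abs(v - w) / (f_max - f_min)

def binary_distance(v, w):
    # 0 if the nominal values match, 1 otherwise.
    return 0 if v == w else 1

def ordered_distance(v, w, ordering):
    # Separation |i - j| / (n - 1) of two values in an ordered list of n values.
    i, j = ordering.index(v), ordering.index(w)
    return abs(i - j) / (len(ordering) - 1)

def ontology_distance(depth_v, depth_w, depth_lca):
    # Depth-based separation using the least common ancestor in a hierarchy.
    return (depth_v + depth_w) / (2 * depth_lca)
```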

Returning to the whisky-tasting example, suppose sweetness and peatiness score values 0–10, color takes ordered values {pale, straw, gold, honey, amber}, palate uses binary distance, and nose is defined by the following distance matrix:

Nose distance matrix:

Distances | peat | fresh | soft | full
peat      | 0    | 0.3   | 1    | 0.5
fresh     | 0.3  | 0     | 0.5  | 0.7
soft      | 1    | 0.5   | 0    | 0.3
full      | 0.5  | 0.7   | 0.3  | 0

The Dalmore whisky above can be compared with Laphroaig and The Macallan as follows:

Sweetness | Peatiness | Color | Nose | Palate     | Distillery
2         | 10        | amber | peat | medium dry | Laphroaig
7         | 4         | gold  | full | big body   | The Macallan
The Manhattan distances are:
$$d(\text{Dalmore, Laphroaig}) = 0.4 + 0.5 + 0 + 0.5 + 0 = 1.4$$
$$d(\text{Dalmore, The Macallan}) = 0.1 + 0.1 + 0.5 + 0 + 1 = 1.7$$
Taking all the whisky features with equal importance, Dalmore is more similar to Laphroaig than to The Macallan.
In situations where the relative importance of features should be taken into account, a weighted version of the distance function should be used; for example, the weighted Manhattan distance between two normalized vectors $\mathbf{x} = (x_1, x_2, \ldots, x_n)$ and $\mathbf{y} = (y_1, y_2, \ldots, y_n)$ with weight $w_i$ for the ith feature is
$$d(\mathbf{x}, \mathbf{y}) = \frac{\sum_{i=1}^{n} w_i \mid x_i - y_i\mid}{\sum_{i=1}^{n} w_i}$$
In the example above, if Peatiness has weight 4 and the other features have weight 1, then the weighted Manhattan distances are:
$$d(\text{Dalmore, Laphroaig}) = (0.4 + 4 \times 0.5 + 0 + 0.5 + 0)/8 = 0.36$$
$$d(\text{Dalmore, The Macallan}) = (0.1 + 4 \times 0.1 + 0.5 + 0 + 1)/8 = 0.25$$
Therefore, emphasizing the distinctive Peatiness feature, Dalmore is more similar to The Macallan than to Laphroaig.
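The worked whisky example can be checked in code. The dictionary-based case format and function names below are illustrative, not from any CBR tool:

```python
COLORS = ["pale", "straw", "gold", "honey", "amber"]   # ordered nominal values
NOSES = ["peat", "fresh", "soft", "full"]
NOSE_DIST = [[0,   0.3, 1,   0.5],                     # distance matrix from the text
             [0.3, 0,   0.5, 0.7],
             [1,   0.5, 0,   0.3],
             [0.5, 0.7, 0.3, 0]]

def feature_distances(a, b):
    # Per-feature distances: normalized numeric (scores 0-10), ordered
    # nominal (color), matrix lookup (nose), and binary (palate).
    return [
        abs(a["sweetness"] - b["sweetness"]) / 10,
        abs(a["peatiness"] - b["peatiness"]) / 10,
        abs(COLORS.index(a["color"]) - COLORS.index(b["color"])) / (len(COLORS) - 1),
        NOSE_DIST[NOSES.index(a["nose"])][NOSES.index(b["nose"])],
        0 if a["palate"] == b["palate"] else 1,
    ]

def manhattan(a, b):
    # Unweighted Manhattan distance: the sum of per-feature distances.
    return sum(feature_distances(a, b))

def weighted_manhattan(a, b, weights):
    # Weighted Manhattan distance, normalized by the sum of the weights.
    ds = feature_distances(a, b)
    return sum(w * d for w, d in zip(weights, ds)) / sum(weights)
```

With Peatiness weighted 4 and the other features weighted 1, this reproduces both orderings from the text: Laphroaig is nearer under equal weights, The Macallan under the weighted measure.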

The similarity knowledge container contains knowledge to calculate similarities. For simple feature vectors, a weighted sum of distances is often sufficient, and the weights are similarity knowledge. However, even our whisky-tasting domain had additional similarity knowledge containing the distance matrix for the nose feature. Structured cases require methods to calculate the similarity of two cases from the similarities of components. CBR may use very knowledge-intensive methods to decide similarity for the retrieval stage. Ease of reuse or revision may even be incorporated as part of the assessment of similarity. Similarity knowledge may also define how  missing values are handled: the feature may be ignored, the similarity may be maximally pessimistic, or a default or average value may be used to calculate the distance.

A CBR case base may be indexed to avoid similarity matching being applied to all the cases in the case base. One approach uses kd trees to partition the case base according to hyperplanes.  Decision Tree algorithms may be used to build the kd tree by using the cases as training data, partitioning the cases according to the chosen decision nodes and storing the cases in the appropriate leaf nodes. Retrieval first traverses the decision tree to select the cases in a leaf node, and similarity matching is applied to only this partition. Case Retrieval Nets are designed to speed up retrieval by applying spreading activation to select relevant cases. In Case Retrieval Nets, the feature value nodes are linked via similarity to each other and to cases. Indexes can speed up retrieval but they also preselect cases according to some criteria that may differ from similarity.
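A minimal sketch of index-then-match retrieval follows, using a single decision node in place of a full kd tree (a real kd tree recurses on further features and hyperplanes; all names here are illustrative):

```python
def build_index(cases, feature, threshold):
    # One decision node partitions the case base into two "leaf" partitions.
    return {
        "feature": feature,
        "threshold": threshold,
        "left": [c for c in cases if c[feature] < threshold],
        "right": [c for c in cases if c[feature] >= threshold],
    }

def retrieve(index, query, distance):
    # Traverse the decision node to select a partition, then apply
    # similarity matching only within that partition.
    leaf = (index["left"] if query[index["feature"]] < index["threshold"]
            else index["right"])
    return min(leaf, key=lambda case: distance(case, query))
```

Note the trade-off stated in the text: the index speeds up retrieval, but a query near the threshold may have its true nearest neighbor in the other partition.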

Reuse and Revision

Reuse may be as simple as copying the solution from the Retrieved Case. If k nearest neighbors are retrieved, then a vote of the classes predicted in the retrieved cases may be used for  classification, or the average of retrieved values for  regression. A weighted vote or weighted average of the retrieved solutions can take account of the nearness of the retrieved cases in the calculation. For more complex solutions, such as designs or plans, the amalgamation of the solutions from the Retrieved Cases may be more knowledge intensive.
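The voting and averaging reuse strategies might be sketched as follows, here weighting each retrieved neighbor by the inverse similarity s = 1/(d+1) (one of several reasonable weighting choices):

```python
def weighted_vote(neighbours):
    # neighbours: (distance, class) pairs for the k retrieved cases.
    # Each vote is weighted by the similarity 1/(d+1) of its case.
    scores = {}
    for d, cls in neighbours:
        scores[cls] = scores.get(cls, 0.0) + 1.0 / (d + 1.0)
    return max(scores, key=scores.get)

def weighted_average(neighbours):
    # Distance-weighted average of retrieved numeric solutions (regression reuse).
    weights = [1.0 / (d + 1.0) for d, _ in neighbours]
    values = [v for _, v in neighbours]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)
```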

If the New Case and the Retrieved Case are different in a significant way, then it may be that the solution from the Retrieved Case should be adapted before being proposed as a Suggested Solution. Adaptation is designed to recognize significant differences between the New and Retrieved Cases and to take account of these by adapting the solution in the Retrieved Case.

In classification domains, it is likely that all classes are represented in the case base. However, different problem features may alter the classification and so adaptation may correct for a lack of cases. In constructive problem-solving like design and planning, however, it is unlikely that all solutions (designs, plans, etc.) will be represented in the case base. Therefore, a retrieved case suggests an initial design or plan, and adaptation alters it to reflect novel feature values.

There are three main types of adaptation that may be used, as part of the reuse step to refine the solution in the Retrieved Case to match better the new problem, or as part of the revise stage to repair the Suggested Solution in the Solved Case:
  • Substitution: Replace parts of the retrieved solution. In Hammond’s (1990) CHEF system to plan Szechuan recipes, the substitution of ingredients enables the requirements of the new menu to be achieved. For example, the beef and broccoli in a retrieved recipe are substituted with chicken and snowpeas.

  • Transformation: Add, change, or remove parts of the retrieved solution. CHEF adds a skinning step to the retrieved recipe that is needed for chicken but not for beef.

  • Generative Adaptation: Replay the method used to derive the retrieved solution. Thus the retrieved solution is not adapted but a new solution is generated from reusing the retrieved method for the new circumstances. This approach is similar to reasoning by analogy.
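A toy substitution adaptation in the spirit of CHEF's ingredient swaps (not CHEF's actual implementation, which reasons over plan steps and goals) might look like:

```python
def substitute(retrieved_solution, retrieved_ingredients, new_ingredients):
    # Map each retrieved ingredient to the corresponding requested one
    # (beef -> chicken, broccoli -> snowpeas) and rewrite the recipe,
    # leaving unmatched items unchanged.
    swaps = dict(zip(retrieved_ingredients, new_ingredients))
    return [swaps.get(item, item) for item in retrieved_solution]
```

A transformation adaptation would then add or remove steps (e.g., inserting a skinning step for chicken) rather than merely renaming items.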

CHEF also had a clear Revise stage in which the Suggested Solution recipe was tested in simulation and any faults were identified, explained, and repaired using repair templates for different types of explained failure. In one recipe, a strawberry soufflé was too liquid; one repair is to drain the strawberry pulp, and this transformation adaptation is one Revise operation that could be applied.

The adaptation knowledge container is an important source of knowledge for some CBR systems, particularly for design and planning, where refining an initial design or plan is expected. Acquiring adaptation knowledge can be onerous, and learning adaptation knowledge from the cases in the case base or from background knowledge of the domain has been effective (Craw et al. 2006; Jalali and Leake 2013).

Retention and Maintenance

The retention of new cases during problem-solving is an important advantage of CBR systems. However, it is not always advantageous to retain all new cases. The Utility Problem (the cost of applying additional knowledge must not outweigh the benefit it brings) in CBR refers to cases and the added cost of retrieval. The case base must be kept “lean and mean,” and so new cases are not retained automatically, and cases that are no longer useful are removed. New cases should be retained if they add to the competence of the CBR system by providing problem-solving capability in an area of the problem space that is currently sparse. Conversely, existing cases should be reviewed for the role they play, and forgetting cases is an important maintenance task. Existing cases may contain outdated experiences and so should be removed, or they may be superseded by new cases.

Case base maintenance manages the contents of the case base to achieve high competence. Competence depends on the domain and may involve:
  • quality of solution;

  • user confidence in solution; or

  • efficiency of solution prediction (e.g., speed-up learning).

As a result, the Retain step in Aamodt and Plaza’s (1994) “Four REs” problem-solving cycle is normally replaced by some form of case base maintenance cycle, such as the Recycle-Retain-Refine loop in Gokër and Roth-Berghofer’s (1999) “Six REs” cycle.

Case base maintenance systems commonly assume that the case base contains a representative sample of the problem-solving experiences. They exploit this by using a leave-one-out approach where repeatedly for each case in the case base, the one extracted case is used as a new case to be solved, and the remaining cases become the case base. This enables the problem-solving competence of the cases in the case base to be estimated using the extracted cases as representative new cases to be solved. Various researchers build a competence model for the case base by identifying groups of cases with similar problem-solving ability and use this model to underpin maintenance algorithms that prioritize cases for deletion and to identify areas where new cases might be added.
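The leave-one-out competence estimate described above can be sketched as follows; the `solve` and `correct` callbacks are assumptions standing in for the surrounding CBR system:

```python
def competence(case_base, solve, correct):
    # Leave-one-out: each stored case in turn becomes the "new" problem,
    # and the remaining cases form the case base that must solve it.
    solved = 0
    for i, held_out in enumerate(case_base):
        rest = case_base[:i] + case_base[i + 1:]
        if correct(solve(rest, held_out["problem"]), held_out["solution"]):
            solved += 1
    return solved / len(case_base)
```

A maintenance algorithm could use such per-case results to prioritize deletion: a case whose removal leaves its neighbors still solvable contributes little competence.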

There are several trade-offs to be managed by case base maintenance algorithms: larger case bases contain more experiences but take longer for retrieval; smaller case bases are likely to lack some key problem-solving ability; cases whose solution is markedly different from their nearest neighbors may be noisy or may be an important outlier. The competence of a case depends on other knowledge containers, and so case base maintenance should not proceed in isolation.

CBR Applications and Tools

Two notable successful deployed applications of CBR are Verdande’s DrillEdge, which monitors oil-well drilling operations to reduce non-productive time (Gundersen et al. 2013), and General Electric’s FormTool for plastic color matching (Cheetham 2005). Many more applications are described in the Fielded Applications of CBR article in Knowledge Engineering Review 20(3) CBR Special Issue (2005) and Montani and Jain’s Successful Case-Based Reasoning Applications texts (Springer, 2010 & 2014):
  • Classification – Medical diagnosis systems include SHRINK for psychiatry, CASEY for cardiac disease, and ICONS for antibiotic therapy for intensive care. Other diagnostic systems include failure prediction of rails for Dutch railways, Boeing’s CASSIOPÉE for trouble-shooting aircraft engines, and the HOMER Help-Desk (Gokër and Roth-Berghofer 1999).

  • Design – Architectural design was a popular early domain: ARCHIE and CADsyn. Other design applications include CADET and KRITIK for engineering design, pharmaceutical tablet formulation, Déjà Vu for plant control software, and Lockheed’s CLAVIER for designing layouts for autoclave ovens.

  • Planning – PRODIGY is a general purpose planner that uses analogical reasoning to adapt retrieved plans. Other planning applications include PARIS for manufacturing planning, mission planning for US navy, and route planning for DaimlerChrysler cars. A recent focus is planning in simulated complex environments as found in Real-Time Strategy Games (Jaidee et al. 2013; Ontañón and Ram 2011; Wender and Watson 2014). CBR has other Game AI applications including robot soccer and poker.

  • Textual CBR – Legal decision support systems were an important early application domain for textual CBR, including HYPO, GREBE, and SMILE. Question answering was another fruitful text-based domain: FAQ-Finder and FA11Q. More recently, textual CBR is used for industrial decision support based on textual reports; e.g., incident management and Health & Safety.

  • Conversational CBR – Conversational systems extract the problem specification from the user through an interactive case-based dialogue. Examples include help-desk support, CBR Strategist for fault diagnosis, and Wasabi and ShowMe product recommender systems.

  • Recommender Systems – There has been a large growth in the use of CBR for recommendation of products, travel planning, and online music. Current topics include preference recommenders for individuals and groups (Quijano-Sánchez et al. 2012) and sentiment/opinion mining from social media to improve personalization (Dong et al. 2014).

  • Workflows – A recent interest in process-oriented CBR has used the CAKE Collaborative Agile Knowledge Engine to create office workflows (Minor et al. 2014). Other applications include science workflows, medical pathways, modeling interaction traces, and recipes. These applications use structured cases and demand knowledge-rich adaptation for reuse. An annual Computer Cooking Competition at recent ICCBR conferences has encouraged the development of various case-based recipe systems including Taaable, JADAWeb, CookIIS, ChefFroglingo, GoetheShaker (cocktails), and EARL (sandwiches).

There are two main open-source CBR tools: myCBR and Colibri. Both provide state-of-the-art CBR functionality, and Colibri also incorporates a range of facilities for textual CBR. The myCBR tool originated from the INRECA methodology, and its website www.mycbr-project.net offers downloads, documentation, tutorials, and publications. Similar Colibri information is available at gaia.fdi.ucm.es/research/colibri, with the jColibri framework also available from www.sourceforge.net. Empolis is one of the pioneers in CBR with CBR Works being one of the first commercial CBR tools. It is now part of Empolis’ Information Access System, and is available at www.empolis.com.

Future Directions

The drivers for ubiquitous computing – wireless communication and small devices – also affect future developments in CBR. The local, independent knowledge of case bases makes mobile devices ideal to collect experiences and to deliver experience-based knowledge for reuse.

Textual CBR systems are becoming increasingly important for extracting and representing knowledge captured in textual documents. This is particularly influenced by the availability of electronic documents in the Web and social media as sources of data for the extraction of representation knowledge. They also provide background knowledge from which to learn knowledge for similarity and adaptation containers.

Cross-References

Recommended Reading

  1. Aamodt A, Plaza E (1994) Case-based reasoning: foundational issues, methodological variations, and system approaches. AI Commun 7:39–59. citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.39.1670
  2. Cheetham W (2005) Tenth anniversary of the plastics color formulation tool. AI Mag 26(3):51–61. www.aaai.org/Papers/Magazine/Vol26/26-03/AIMag26-03-007.pdf
  3. Craw S, Wiratunga N, Rowe RC (2006) Learning adaptation knowledge to improve case-based reasoning. Artif Intell 170(16–17):1175–1192. doi:10.1016/j.artint.2006.09.001
  4. Dong R, Schaal M, O’Mahony MP, McCarthy K, Smyth B (2014) Further experiments in opinionated product recommendation. In: Lamontagne L, Plaza E (eds) Proceedings of the 22nd international conference on case-based reasoning, Cork. LNAI, vol 8765. Springer, Berlin/Heidelberg, pp 110–124. doi:10.1007/978-3-319-11209-1_9
  5. Gokër MH, Roth-Berghofer T (1999) The development and utilization of the case-based help-desk support system HOMER. Eng Appl Artif Intell 12:665–680. doi:10.1016/S0952-1976(99)00037-8
  6. Gundersen OE, Sørmo F, Aamodt A, Skalle P (2013) A real-time decision support system for high cost oil-well drilling operations. AI Mag 34(1):21–31. www.aaai.org/ojs/index.php/aimagazine/article/view/2434
  7. Hammond KJ (1990) Explaining and repairing plans that fail. Artif Intell 45(1–2):173–228
  8. Jaidee U, Muñoz-Avila H, Aha DW (2013) Case-based goal-driven coordination of multiple learning agents. In: Delaney SJ, Ontanon S (eds) Proceedings of the 21st international conference on case-based reasoning, Saratoga Springs. LNAI, vol 7969. Springer, Berlin/Heidelberg, pp 164–178. doi:10.1007/978-3-642-39056-2_12
  9. Jalali V, Leake D (2013) Extending case adaptation with automatically-generated ensembles of adaptation rules. In: Delaney SJ, Ontanon S (eds) Proceedings of the 21st international conference on case-based reasoning, Saratoga Springs. LNAI, vol 7969. Springer, Berlin/Heidelberg, pp 188–202. doi:10.1007/978-3-642-39056-2_14
  10. López de Mántaras R, McSherry D, Bridge D, Leake D, Smyth B, Craw S, Faltings B, Maher ML, Cox MT, Forbus K, Aamodt A, Watson I (2005) Retrieval, reuse, revision, and retention in case-based reasoning. Knowl Eng Rev 20(3):215–240. doi:10.1017/S0269888906000646
  11. Minor M, Bergmann R, Görg S (2014) Case-based adaptation of workflows. Inf Syst 40:142–152. doi:10.1016/j.is.2012.11.011
  12. Ontañón S, Ram A (2011) Case-based reasoning and user-generated AI for real-time strategy games. In: Artificial intelligence for computer games. Springer, New York, pp 103–124. doi:10.1007/978-1-4419-8188-2_5
  13. Quijano-Sánchez L, Bridge D, Díaz-Agudo B, Recio-García JA (2012) Case-based aggregation of preferences for group recommenders. In: Díaz-Agudo B, Watson I (eds) Proceedings of the 20th international conference on case-based reasoning, Lyon. LNAI, vol 7466. Springer, Berlin/Heidelberg, pp 17–31. doi:10.1007/978-3-642-32986-9_25
  14. Richter MM (2009) The search for knowledge, contexts, and case-based reasoning. Eng Appl Artif Intell 22(1):3–9. doi:10.1016/j.engappai.2008.04.021
  15. Wender S, Watson I (2014) Combining case-based reasoning and reinforcement learning for unit navigation in real-time strategy game AI. In: Lamontagne L, Plaza E (eds) Proceedings of the 22nd international conference on case-based reasoning, Cork. LNAI, vol 8765. Springer, Berlin/Heidelberg, pp 511–525. doi:10.1007/978-3-319-11209-1_36

Copyright information

© Springer Science+Business Media New York 2017

Authors and Affiliations

  1. Robert Gordon University, Aberdeen, UK