Introduction to Clinical Natural Language Processing with Python

Background: Many of the most valuable insights in medicine are contained in written patient records. While some of these are coded into structured data as part of the record entry, many exist only as text. Although a complete understanding of this text is beyond current technology, a surprising amount of insight can be gained from relatively simple natural language processing. Learning objectives: This chapter introduces the basics of text processing with Python, such as named-entity recognition, regular expressions, text tokenization and negation detection. By working through the four structured NLP tutorials in this chapter, the reader will learn these NLP techniques to extract valuable clinical insights from text. Limitations: The field of Natural Language Processing is as broad and varied as human communication. The techniques we will discuss in this chapter are but a sampling of what the field has to offer. That said, we will provide enough basic techniques to allow the reader to start to unlock the potential of textual clinical notes.


Introduction
Natural Language Processing (NLP) is the ability of a computer to understand human language as it is spoken or written (Jurafsky and Martin 2009). While that sounds complex, it is actually something you've probably been doing a fairly good job at since before you were four years old.
Most NLP technology development is akin to figuring out how to explain what you want to do to a four-year-old. This rapidly turns into a discussion of edge cases (e.g., "it's not gooder; it's better"), and the more complicated the task (i.e., the more poorly structured the language you are trying to interpret), the harder it is. This is especially true if you are hoping that an NLP system will replace a human in reliably extracting domain-specific information from free text.
However, if you are just looking for some help wading through potentially thousands of clinical notes a bit more quickly, you are in luck. There are many "4-year-old" tasks that can be very helpful and save you a lot of time. We'll focus on these for this chapter, with some examples.

Setup Required
This chapter aims to teach practical natural language processing (NLP) for clinical applications via working through four independent NLP tutorials. Each tutorial is associated with its own Jupyter Notebook.
The chapter uses real de-identified clinical note examples queried from the MIMIC-III dataset. As such, you will need to obtain your own Physionet account and access to use the MIMIC dataset first. Please follow the instructions here to obtain dataset access: https://mimic.physionet.org/gettingstarted/access/.
However, you will not need to set up the MIMIC SQL database locally to download the datasets required for this chapter. For each section, the necessary SQL code to query the practice datasets will be given to you, so you can run the queries yourself via MIMIC's online Query Builder application: https://querybuilder-lcp.mit.edu.
The NLP demonstration exercises in the chapter are run in the Python Jupyter Notebook environment. Please see the Project Jupyter website for the installation instructions (https://jupyter.org/install).

See Jupyter Notebook: Part A-Spotting NASH
The first example is the task of using notes to identify patients for possible inclusion in a cohort. In this case we're going to try to find records of patients with Nonalcoholic Steatohepatitis (NASH). It is difficult to use billing codes (i.e., ICD-9) to identify patients with this condition because it gets confounded with a generic nonalcoholic liver disease ICD-9 code (i.e., 571.8). If you need to explicitly find patients with NASH, doing so requires looking into the text of the clinical notes.
In this example, we would like the system to "find any document where the string 'NASH' or 'Nonalcoholic Steatohepatitis' appears". Note that in this first filter, we are not going to worry about whether the phrase is negated (e.g., "The patient does not have NASH") or whether it shows up as a family history mention (e.g., "My mom suffered from NASH"). Negation detection will be dealt with separately in tutorial 3. Since Nash is also a family name, however, we will need to worry about "Thomas Nash" or "Russell Nash". In general, any further context interpretation will need to be screened out by a human as a next step or be dealt with by further NLP context interpretation analysis.

Accessing notes data
First, we need access to the data. Go to: https://querybuilder-lcp.mit.edu. Login with the username and password you have obtained from Physionet to access the MIMIC-III database.
Since NASH is one of the causes of liver failure or cirrhosis, for the purpose of this example, we are going to narrow the search by exporting 1000 random notes where "cirrhosis" is mentioned in the notes. In a real example, you might want to apply other clinical restrictions using either the free text or the structured data to help you better target the notes you are interested in analysing.
In the query home console, paste in the following SQL commands and click "Execute Query". After the query finishes running, you should see the tabular results below the console. Now click "Export Results" and save the file as "part_a.csv" in the directory (i.e., folder) where you are running your local Jupyter notebook from.
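The exact query text is not reproduced in this section; as a sketch (assumed, not the chapter's verbatim SQL), a query of the following shape would pull notes mentioning cirrhosis in Query Builder:

```sql
-- Assumed example: select notes that mention "cirrhosis";
-- the chapter's actual query additionally randomizes the 1000 notes
SELECT row_id, subject_id, hadm_id, text
FROM noteevents
WHERE text LIKE '%cirrhosis%'
LIMIT 1000;
```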

Setting up in Jupyter Notebook
Now we can do some NLP exercises in Jupyter Notebook with Python. As with any Jupyter script, the first step is simply loading the libraries you will need.

```python
# First off - load all the python libraries we are going to need
import pandas as pd
import numpy as np
import random
from IPython.core.display import display, HTML
```

Then we can import the notes dataset we just exported from Query Builder to the Jupyter Notebook environment by running the following code:

```python
filepath = 'replace this with your path to your downloaded .csv file'
notes = pd.read_csv(filepath)
```

Note, if you already have the MIMIC dataset locally set up, the following code snippet will allow you to query your local MIMIC SQL database from the Jupyter notebook environment.

NLP Exercise: Spotting 'NASH' in clinical notes with brute force
We now need to define the terms we are looking for. For this simple example, we are NOT going to ignore upper and lower letter case, so "NASH", "nash", and "Nash" are considered different terms. In this case, we will focus exclusively on "NASH", so we are less likely to pick up the family name "Nash".
```python
# Here is the list of terms we are going to consider "good"
terms = ['NASH', 'nonalcoholic steatohepatitis']
```

This is the code that brute-forces through the notes and finds the notes that have an exact phrase match with our target phrases. We'll keep track of the "row_id" for future use.
```python
# Now scan through all of the notes. Do any of the terms appear?
# If so stash the note id for future use
matches = []
for index, row in notes.iterrows():
    if any(x in row['text'] for x in terms):
        matches.append(row['row_id'])
print("Found " + str(len(matches)) + " matching notes.")
```

Lastly, we pick one matching note and display it. Note, you can "Ctrl-Enter" this cell again and again to get different samples.
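The display cell itself is not reproduced here; a minimal sketch of that step (using a toy dataframe to stand in for the real `notes` loaded from part_a.csv) might look like this:

```python
import random
import pandas as pd

# Toy stand-in for the 'notes' dataframe loaded from part_a.csv;
# in the notebook you would use the real dataframe instead
notes = pd.DataFrame({
    'row_id': [1, 2, 3],
    'text': ['Patient has NASH cirrhosis.',
             'No evidence of liver disease.',
             'Known NASH, now with ascites.'],
})
terms = ['NASH', 'nonalcoholic steatohepatitis']

# Re-run the matching step from above
matches = [row['row_id'] for _, row in notes.iterrows()
           if any(t in row['text'] for t in terms)]

# Pick one matching note at random and display its text; re-running
# the cell samples a different note each time
sampled_id = random.choice(matches)
sampled_text = notes.loc[notes['row_id'] == sampled_id, 'text'].iloc[0]
print(sampled_text)
```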

Adding Flexibility in Search with Regular Expressions
While simple word matching is helpful, sometimes it is more useful to utilize more advanced searches. For example, extracting measurements (i.e., matching numbers associated with specific terms, e.g., HR, cm, BMI, etc.) or situations where exact character matching is not desired (e.g., if one would also like to capture plurals or other tenses of a given term). There are many task-specific examples like these where regular expressions ("regex") (Kleene 1951) can add flexibility to searching information in documents.
You can think of regular expressions as a set of rules to specify text patterns to programming languages. They are most commonly used for searching strings with a pattern across a large corpus of documents. A search using regular expressions will return all the matches associated with the specified pattern. The notation used to specify a regular expression offers flexibility in the range of patterns one can specify. In fact, in its simplest form, a regular expression search is nothing but an exact match of a sequence of characters in the text of the documents. Such direct term search is something we discussed in the previous example for spotting mentions of NASH.
The specific syntax used to represent regular expressions in each programming language may vary, but the concepts are the same. The first part of this tutorial will introduce you to the concept of regular expressions through a web editor. The second part will use regular expressions in Python to demonstrate the extraction of numerical values from clinical notes.

Regular Expression Rules
Sections 14.3.2.1 and 14.3.2.2 will both be using some of the regular expression rules shown below.
By default, X is just one character, but you can use () to group more than one. For example:
• A+ would match A, AA, AAAAA
• (AB)+ would match AB, ABAB, ABABABAB
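The quantifier rules above can be checked directly with Python's built-in re module:

```python
import re

# 'A+' matches runs of one or more A's
print(re.findall(r'A+', 'A AA BAAAB'))           # ['A', 'AA', 'AAA']

# '(AB)+' matches one or more repetitions of the group 'AB'
print(bool(re.fullmatch(r'(AB)+', 'ABABABAB')))  # True
print(bool(re.fullmatch(r'(AB)+', 'ABA')))       # False
```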

Special Characters
{ } [ ] ( ) ^ $ . | * + ? \ (and - inside of brackets [ ]) are special characters and need to be "escaped" with a \ in order to match them literally (the \ tells the engine to ignore the special meaning and treat the character as a normal one).
For example:
• Matching . will match any character (as noted in Table 14.1).
• But if you want to match a literal period, you have to use \. (Table 14.2).
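The difference between the special `.` and an escaped `\.` is easy to see in Python:

```python
import re

text = 'abc a.c axc'

# Unescaped '.' matches any character
print(re.findall(r'a.c', text))   # ['abc', 'a.c', 'axc']

# Escaped '\.' matches only a literal period
print(re.findall(r'a\.c', text))  # ['a.c']
```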

Visualization of Regular Expressions
To best visualize how regular expressions work, we will use a graphical interface. In a web search engine, you can search for "regex tester" to find one. These regular expression testers typically have two input fields:
1. A Test String input box, which contains the text we want to extract terms from.
2. A Regular Expression input box, in which we can enter a pattern capturing the terms of interest.
Below is an example.
(1) In the Test String box, paste the following plain text, which contains the names of a few common anti-hypertension blood pressure medicines:

```
LISINOpril 40 MG PO Daily
captopril 6.25 MG PO TID
```

(2) In the Regular Expression box, test each one of the patterns in Table 14.3 and observe the difference in items that are highlighted:
• .*pril — this catches 0 or more characters before "pril".
• [a-z]*pril — this catches 0 or more lower-case characters before "pril", but does not match spaces or numbers etc.
• [abcdefghijklmnopqrstuvwxyz]*pril — notice that everything inside of the bracket is a character that we want to catch; it has the same results as the pattern above.
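If you prefer to experiment in Python rather than a web tester, the same patterns can be tried with the re module:

```python
import re

text = 'LISINOpril 40 MG PO Daily\ncaptopril 6.25 MG PO TID'

# '.*pril' greedily catches any characters (on the same line) before 'pril'
print(re.findall(r'.*pril', text))      # ['LISINOpril', 'captopril']

# '[a-z]*pril' only extends the match through lower-case letters, so the
# upper-case prefix of LISINOpril is not captured
print(re.findall(r'[a-z]*pril', text))  # ['pril', 'captopril']
```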

See Jupyter Notebook: Part B-Fun with regular expressions
In this tutorial, we are going to use regular expressions to identify measurement concepts in a sample of Echocardiography ("echo") reports in the MIMIC-III database. An echocardiogram is an ultrasound examination of the heart. The associated report contains many clinically useful measurement values, such as blood pressure, heart rate and sizes of various heart structures. Before writing any code, we should always take a look at a sample of the notes to see what our NLP task looks like. This is a very well-formatted section of text. Let us work with a slightly more complex requirement (i.e., task), where we would like to extract the numerical value of the heart rate of a patient from these echocardiography reports.
A direct search using a lexicon-based approach as with NASH will not work, since numerical values can have a range. Instead, it would be desirable to specify a pattern for what a number looks like. Such pattern specifications are possible with regular expressions, which makes them extremely powerful. A single digit number is denoted by the notation \d and a two-digit number is denoted by \d\d. A search using this regular expression will return all occurrences of two-digit numbers in the corpus.

Accessing notes data
Again, we will need to query and download the Echocardiogram reports dataset from MIMIC's online Query Builder: https://querybuilder-lcp.mit.edu. Once logged in, paste the following SQL query code into the Home console and click "Execute Query".
```sql
SELECT row_id, subject_id, hadm_id, text
FROM noteevents
WHERE category = 'Echo'
LIMIT 10;
```

All clinical notes in MIMIC are contained in the NOTEEVENTS table. The column with the actual text of the report is the TEXT column. Here, we are extracting the TEXT column from the first ten rows of the NOTEEVENTS table.
Click "Export Results" and save the exported file as "part_b.csv" file in the directory (i.e., folder) where you are running your local Jupyter notebook from. If you have the MIMIC-III database installed locally, you could query the dataset from the notebook locally as shown in tutorial "1. Direct search using curated lexicons"; simply replace the relevant SQL code.

Setting up in Jupyter Notebook
First, we import the necessary libraries for Python.
```python
import os
import re
import pandas as pd
```

Next, we import the echo reports dataset to your Jupyter notebook environment:

```python
filepath = 'replace this with your path to your downloaded .csv file'
first_ten_echo_reports = pd.read_csv(filepath)
```

Let us examine the result of our query. We will print out the first 10 rows.
```python
first_ten_echo_reports.head(10)
```

Let us dig deeper and view the full content of the first report with the following line.
Arrays start numbering at 0, so if you want to print out the second row instead, change the index accordingly. Make sure to rerun the block after you make changes.

NLP Exercise: Extracting heart rate from this note
We imported the regular expressions library earlier (i.e., import re). Remember, the variable "report" was established in the code block above. If you want to look at a different report, you can change the row number and rerun that block followed by this block.
```python
regular_expression_query = r'HR.*'
hit = re.search(regular_expression_query, report)
if hit:
    print(hit.group())
else:
    print('No hit for the regular expression')
```

We are able to extract lines of text containing heart rate, which is of interest to us. But we want to be more specific and extract the exact heart rate value (i.e., 85) from this line. Two-digit numbers can be extracted using the expression \d\d. Let us create a regular expression so that we get the first two-digit number following the occurrence of "HR" in the report.
```python
regular_expression_query = r'(HR).*(\d\d)'
hit = re.search(regular_expression_query, report)
if hit:
    print(hit.group(0))
    print(hit.group(1))
    print(hit.group(2))
else:
    print('No hit for the regular expression')
```

The above modification now enables us to extract the desired values of heart rate. Now let us try to run our regular expression on each of the first ten reports and print the result.
The following code uses a "for loop", which means for the first 10 rows in "first_ten_echo_reports", we will run our regular expression. We wrote the number 10 in the loop because we know there are 10 rows.
```python
for i in range(10):
    report = first_ten_echo_reports['text'][i]
    hit = re.search(regular_expression_query, report)
    if hit:
        print('Report ' + str(i) + ': ' + hit.group(2))
    else:
        print('Report ' + str(i) + ': No hit for the regular expression')
```

We do not get any hits for reports 3 and 4. If we take a closer look, we will see that there was no heart rate recorded for these two reports.
Here is an example of printing out echo report 3; replace the 3 with 4 to print out the 4th report.

Checking for Negations
See Jupyter Notebook: Part C-Sentence tokenization and negation detection
Great! Now you can find terms or patterns with brute force search and with regex, but does the context in which a given term occurs in a sentence or paragraph matter for your clinical task? Does it matter, for example, if the term was affirmed, negated, hypothetical, probable (hedged), or related to another unintended subject? Oftentimes, the answer is yes. (See Coden et al. 2009 for a good discussion of the challenges of negation detection in a real-world clinical problem.) In this section, we will demonstrate negation detection, the most commonly required NLP context interpretation step, by showing how to determine whether "pneumothorax" is reported to be present or not for a patient according to their chest X-ray (CXR) report. First, we will spot all CXR reports that mention pneumothorax. Then we will show you how to tokenize (separate out) the sentences in the report document with NLTK (Perkins 2010) and determine whether the pneumothorax mention was affirmed or negated with Negex (Chapman et al. 2001).

Accessing notes data
Again, in Query Builder https://querybuilder-lcp.mit.edu (or local SQL database), run the following SQL query. Export 1000 rows and save results as instructed in prior examples and name the exported file as "part_c.csv".

Setting up in Jupyter Notebook
Again, we will first load the required Python libraries and import the CXR reports dataset we just queried and exported from Query Builder.
```python
# Basic required libraries are:
import pandas as pd
import numpy as np
import random
import nltk

# import dataframe
filename = 'replace this with your path to your downloaded .csv file'
df_cxr = pd.read_csv(filename)

# How many reports do we have?
print(len(df_cxr))
```

NLP Exercise: Is "Pneumothorax" Mentioned?
Next, let's get all the CXR reports that mention pneumothorax.
```python
# First we need to have a list of terms that mean "pneumothorax" - let's
# call these commonly known pneumothorax variations our ptx lexicon:
ptx = ['pneumothorax', 'ptx', 'pneumothoraces']

# Simple spotter: spot the occurrence of a term from a given lexicon
# anywhere within a text document or sentence:
def spotter(text, lexicon):
    text = text.lower()
    # Spot if a document mentions any of the terms in the lexicon
    # (not worrying about negation detection yet)
    match = [x in text for x in lexicon]
    if any(match):
        mentioned = 1
    else:
        mentioned = 0
    return mentioned

# Let's now test the spotter function with some simple examples:
sent1 = 'Large left apical ptx present.'
sent2 = 'Hello world for NLP negation'

# Pneumothorax mentioned in text, spotter returns 1 (yes)
spotter(sent1, ptx)
```

```python
# Pneumothorax not mentioned in text, spotter returns 0 (no)
spotter(sent2, ptx)
```

Now, we can loop our simple spotter through all the reports and output all report IDs (i.e., row_id) that mention pneumothorax.
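The looping cell is not reproduced here; a self-contained sketch (with a toy dataframe standing in for the real df_cxr) could look like this:

```python
import pandas as pd

# Toy stand-in for the df_cxr dataframe loaded from part_c.csv
df_cxr = pd.DataFrame({
    'row_id': [101, 102, 103],
    'text': ['No ptx identified.',
             'Lungs are clear.',
             'Small apical pneumothorax.'],
})
ptx = ['pneumothorax', 'ptx', 'pneumothoraces']

def spotter(text, lexicon):
    text = text.lower()
    return 1 if any(x in text for x in lexicon) else 0

# Collect the row_ids of every report that mentions pneumothorax
ptx_report_ids = [row['row_id'] for _, row in df_cxr.iterrows()
                  if spotter(row['text'], ptx) == 1]
print(ptx_report_ids)  # [101, 103]
```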

NLP Exercise: Improving Spotting of a Concept in Clinical Notes
Unfortunately, medical text is notorious for misspellings and numerous nonstandardized ways of describing the same concept. In fact, even for pneumothorax, there are many additional ways it could "appear" as a unique string of characters to a computer in free text notes. It is a widely recognized NLP problem that one set of vocabularies (lexicons) that work well on one source of clinical notes (e.g., from one particular Electronic Medical Record (EMR)) may not work well on another set of notes (Talby 2019). Therefore, a huge part of being able to recognize any medical concept with high sensitivity and specificity from notes is to have a robust, expert-validated vocabulary for it.
There are a few unsupervised NLP tools or techniques that can help with curating vocabularies directly from the corpus of clinical notes that you are interested in working with. They work by predicting new "candidate terms" that occur in similar contexts as a few starting "seed terms" given by a domain expert, who then has to decide if the candidate terms are useful for the task or not.
There also exist off-the-shelf, general-purpose biomedical dictionaries of terms, such as the UMLS (Bodenreider 2004) or SNOMED_CT (Donnelly 2006). However, they often contain noisy vocabularies and may not work as well as you would like on the particular free text medical corpus you want to apply the vocabulary to. Nevertheless, they might still be useful to kickstart the vocabulary curation process if you are interested in extracting many different medical concepts and willing to manually clean up the noisy terms.
Word2vec is likely the most basic NLP technique that can predict terms that occur in similar neighboring contexts. More sophisticated tools, such as the "Domain Learning Assistant" tool first published by Coden et al. (2012), integrate a user interface that allows more efficient ways of displaying and adjudicating candidate terms. Using this tool, which also uses other unsupervised NLP algorithms that perform better at capturing longer candidate phrases and abbreviations, a clinician is able to curate the following variations for pneumothorax in less than 5 minutes.

Pause for thought
Now we can spot mentions of relevant terms, but there are still some other edge cases you should think about when matching terms in free text:
1. Are spaces before and/or after a term important? Could they alter the meaning of the spot? (e.g. should [pneumothorax] and hydro[pneumothorax] be treated the same?)
2. Is punctuation before and/or after a term going to matter?
3. Do upper or lower cases matter for a valid match? (The above simple spotter turns all input text into lower case, so in effect it ignores letter case when searching for a match.)

What could you do to handle these edge cases?
1. Use regular expressions when spotting the terms. You can pick what characters are allowed on either end of a valid matched term, as well as upper or lower letter cases.
2. Add some common acceptable character variations, such as punctuation or spaces on either end of each term in the lexicon (e.g., "ptx/").
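As a small illustration of the first idea, the word-boundary anchor \b restricts which characters may sit on either end of a match, so a lexicon term does not fire inside a longer word:

```python
import re

# Word boundaries (\b) prevent substring hits inside longer words:
# plain 'pneumothorax' should not fire on 'hydropneumothorax'
pattern = re.compile(r'\bpneumothorax\b', re.IGNORECASE)

print(bool(pattern.search('No Pneumothorax is seen.')))        # True
print(bool(pattern.search('Small hydropneumothorax noted.')))  # False
```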

NLP Exercise: Negation Detection at Its Simplest
Obviously, not all these reports that mention pneumothorax signify that the patients have the condition. Oftentimes, if a term is negated, it occurs in the same sentence as some negation indication words, such as "no", "not", etc. Negation detection at its simplest would be to detect such co-occurrence in the same sentence.
```python
# e.g. Pneumothorax mentioned in text but negated; a simple spotter
# would still return 1 (yes)
sent3 = 'Pneumothorax has resolved.'
spotter(sent3, ptx)
```

```python
# e.g. Simply spotting negation words in the same sentence:
neg = ['no', 'never', 'not', 'removed', 'ruled out', 'resolved']
spotter(sent3, neg)
```

However, there would be other edge cases. For example, what if "no" is followed by a "but" in a sentence? e.g. "There is no tension, but the pneumothorax is still present." Luckily, smarter NLP folks have already written negation libraries that spot negated mentions of terms for us and work on these more complicated cases. First, however, we will need to learn how to pre-process the input text document into sentences (i.e. sentence tokenization).

NLP Exercise: Sentence Tokenization with NLTK
Splitting the text into sentences is usually required before running most negation libraries. Here is a link to instructions for installing NLTK: https://www.nltk.org/install.html.
```python
# Let's print a random report from df_cxr
report = df_cxr.text[random.randint(0, 100)]
print(report)
```

There are two main ways to tokenize sentences with NLTK. If you do not need to save the sentence offsets (i.e., where the sentence started and ended in the original report), then you can just use "sent_tokenize".

NLP Exercise: Using an Open-Source Python Library for Negation-Negex
Next, let us finally introduce "Negex", an open-source Python tool for detecting negation. It has limitations, but it is easier to build and improve on top of it than to write something from scratch. You can download negex.python from: https://github.com/mongoose54/negex/tree/master/negex.python.

Some observations
You can see that even Negex is not perfect at its single sentence level prediction. Here, it does not pick up hypothetical mentions of pneumothorax; it interpreted "r/o ptx" as affirmed. However, at the whole report level, later sentences might give a more correct negated prediction.

See Jupyter Notebook: Part D-Obesity challenge
Let's consider a quick real-world challenge to test what we have learned. Unlike many medical concepts, obesity is one that has a fairly well-established definition. It may not always be correct (Ahima and Lazar 2013), but the definition is clear and objective: if a patient's BMI is above 30.0, they are considered obese.
However, it is worthwhile to be aware that many other clinical attributes in medical notes are not as clear-cut. For example, consider the i2b2 challenge on smoking detection (I2B2 2006). How does one define "is smoker"? Is a patient in a hospital who quit smoking three days ago on admission considered a non-smoker? What about a patient in a primary care clinic who quit smoking a few weeks ago? Similarly, how does one define "has back pain", "has non-adherence", and so on? In all of these cases, the notes may prove to be the best source of information to determine the cohort inclusion criteria for a particular clinical study. The NLP techniques you have learned in this chapter should go a long way to help structure the "qualitative" information in the notes into quantitative tabular data.
The goal of the obesity challenge is to see how accurately you can identify patients who are obese from their clinical notes. In the interest of an easy-to-compute gold standard for our test (i.e., instead of manually annotating gold standard data ourselves for, e.g., "has back pain"), we picked "obesity" so that we can simply calculate each patient's BMI from the height and weight information in MIMIC's structured data.
For the Obesity Challenge exercise:
1. We will generate a list of 50 patients who are obese and 50 who are not.
2. Then, we are going to pull all the notes for those patients.
3. Using the notes, you need to figure out which patients are obese or not.
4. At the end, the results will be compared with the gold standard to see how well you did.

Accessing notes data
The SQL query for this exercise is fairly long so it is saved in a separate text file called "part_d_query.txt" in this chapter's Github repository. Copy the SQL command from the text file, then paste and run the command in Query Builder (https://querybuilder-lcp.mit.edu). Rename the downloaded file as "obese-gold.csv". Make sure the file is saved in the same directory as the following notebook.

Setting up in Jupyter Notebook
As usual, we start with loading the libraries and dataset we need:

```python
# First off - load all the python libraries we are going to need
import pandas as pd
import numpy as np
```

```python
notes_filename = 'replace this with your path to your downloaded .csv file'
obesity_challenge = pd.read_csv(notes_filename)
```

The "obesity_challenge" dataframe has one column, "obese", that defines patients who are obese (1) or normal (0). The definition of obese is BMI ≥ 30, overweight is BMI ≥ 25 and < 30, and normal is BMI ≥ 18.5 and < 25. We will create the notes and the gold standard data frames by subsetting "obesity_challenge".
NLP Exercise: Trivial term spotting as baseline
For this exercise, we are going to begin with trivial term spotting (which you have encountered in NLP exercise Part A) with only one obesity-related term at baseline. You, however, are going to work on editing and writing more complex, interesting and effective NLP code!

```python
# Here is the list of terms we are going to consider "good" or associated
# with what we want to find, obesity.
terms = ['obese']
```

Using the trivial term spotting approach, we're going to quickly scan through our note subset and find people where the obesity-related term(s) appear.
```python
# Now scan through all of the notes. Do any of the terms appear?
# If so stash the note id for future use
matches = []
for index, row in notes.iterrows():
    if any(x in row['text'] for x in terms):
        matches.append(row['subject_id'])
print("Found " + str(len(matches)) + " matching notes.")
```

We will assume all patients are initially "unknown" and then, for each of the true matches, we'll flag them. Note: we are using 1 for obese, 0 for unknown and −1 for not-obese. For our code at baseline, we have not implemented any code that sets a note to −1, which can be the first improvement that you make.
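The flagging and scoring cells are not reproduced here; the scheme below is a self-contained sketch (toy data, and an assumed one-point-per-correct-call score) of what that step might look like:

```python
import pandas as pd

# Toy gold standard: 1 = obese, 0 = not obese (assumed column layout)
gold = pd.DataFrame({'subject_id': [1, 2, 3, 4],
                     'obese':      [1, 0, 1, 0]})

# subject_ids flagged by the term spotter above (toy values)
matches = [1, 3]

# Everyone starts as unknown (0); matched patients are flagged obese (1).
# The baseline never assigns -1 (not obese), which is one easy improvement.
flags = {sid: 0 for sid in gold['subject_id']}
for sid in matches:
    flags[sid] = 1

# Assumed scoring: one point per correct definite call
score = sum(int(flags[r['subject_id']] == 1) if r['obese'] == 1
            else int(flags[r['subject_id']] == -1)
            for _, r in gold.iterrows())
print(score)  # 2 of a possible 4 here
```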

NLP Exercise: can you do better?
We got a score of 19 (out of a possible 100) at baseline. Can you do better?
Here are a few NLP ideas that can improve the score:
• Develop a better lexicon that captures the various ways in which obesity can be mentioned. For example, abbreviations are often used in clinical notes.
• Check whether the mentioned term(s) for obesity are further invalidated or not. For example, if "obese" is mentioned in "past", "negated", "family history" or other clinical contexts.
• Use other related information from the notes, e.g. extract height and weight values with regular expressions and compute the patient's BMI, or directly extract the BMI value from the notes.
• Tweak the regular expressions to make sure additional cases of how terms can be mentioned in text are covered (e.g. plurals, past tenses (if they do not change the meaning of the match)).
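The height-and-weight idea can be sketched as follows; the note layout and the regex patterns here are hypothetical, since the exact formatting varies across MIMIC notes:

```python
import re

# Hypothetical note fragment; real notes vary in layout
note = 'Height: (in) 70\nWeight (lb): 210'

# Patterns assume the "Height: (in) NN" / "Weight (lb): NN" layout above
h = re.search(r'Height:\s*\(in\)\s*(\d+)', note)
w = re.search(r'Weight\s*\(lb\):\s*(\d+)', note)
if h and w:
    height_m = int(h.group(1)) * 0.0254   # inches to metres
    weight_kg = int(w.group(1)) * 0.4536  # pounds to kilograms
    bmi = weight_kg / height_m ** 2
    print(round(bmi, 1))  # ~30.1, i.e. obese
```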

Summary Points
1. Spotting a "named-entity" is as simple as writing code to do a search-and-find in raw text.
fluid" could be interpreted as non-compliant as the patient knowingly skipped doses (and subsequently was admitted to the ICU with diabetic ketoacidosis, a complication due to not getting insulin). Or, this sentence could be judged to be compliant as the patient "tried". Such judgement calls are beyond the scope of any computer and depend on what the information is going to be used for in downstream analytics.

Conclusion
We have provided an introduction to NLP basics in this chapter. That being said, NLP is a field that has been actively researched for over half a century, and for well-written notes, there are many options for code or libraries that can be used to identify and extract information.
A comprehensive overview of approaches used in every aspect of natural language processing can be found in Jurafsky and Martin (2009). Information extraction, including named-entity recognition and relation extraction from text, is one of the most-studied areas in NLP (Meystre et al. 2008), and the most recent work is often showcased in SemEval tasks (e.g., SemEval 2018).
For a focus on clinical decision support, Demner-Fushman et al. (2009) provides a broad discussion. Deep learning is an increasingly popular approach for extraction, and its application to electronic health records is addressed in Shickel et al. (2017).
Nonetheless, the basics outlined in this chapter can get you quite far. The text of medical notes gives you an opportunity to do more interesting data analytics and gain access to additional information. NLP techniques can help you systematically transform the qualitative unstructured textual descriptions into quantitative attributes for your medical analysis.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.