Adversarial attacks on fingerprint liveness detection
Deep neural networks are vulnerable to adversarial samples, posing potential threats to applications that deploy deep learning models in practical conditions. A typical example is the fingerprint liveness detection module in fingerprint authentication systems. Inspired by the great progress of deep learning, deep network-based fingerprint liveness detection algorithms have sprung up and now dominate the field. In this paper, we therefore investigate the feasibility of deceiving state-of-the-art deep network-based fingerprint liveness detection schemes by exploiting this vulnerability. We make extensive evaluations with three existing adversarial methods: FGSM, MI-FGSM, and Deepfool. We also propose an adversarial attack method that enhances the robustness of adversarial fingerprint images to various transformations such as rotation and flipping. We demonstrate that these outstanding schemes are likely to classify fake fingerprints as live fingerprints once tiny perturbations are added, even without internal details of the underlying model. The experimental results reveal a serious security loophole in these schemes: urgent attention should be paid to adversarial robustness, not only in fingerprint liveness detection but in all deep learning applications.
Keywords: Deep learning · Fingerprint liveness detection · Adversarial attacks
Notation used throughout:
- An original clean image (denoted X_c)
- An adversarial sample
- The label of the original clean image
- The target label
- The loss function J of the classifier
- The parameters of the classifier
- The size of the perturbation, ε
Adversarial attacks can be categorized by how much attackers know about the target model and by whether the misclassified label is specified. Generating adversarial samples with access to the architecture and parameters of the target model is referred to as a white-box attack; generating them without such access is a black-box attack. If, beyond merely causing a misclassification, the adversarial sample is required to be classified as a specific class, the attack is called targeted; otherwise it is untargeted. Generating adversarial samples is a constrained optimization problem: given a clean image and a fixed classifier that originally classifies it correctly, the goal is to make the classifier misclassify it. Note that the prediction can be regarded as a function of the clean image, since the parameters of the classifier are fixed. General adversarial attack methods therefore compute gradients of the classifier's loss with respect to the clean image so as to push the prediction away from the original result, and modify the clean image accordingly.
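This gradient-with-respect-to-the-input view can be made concrete with a minimal sketch. The "classifier" below is a hypothetical toy logistic-regression model standing in for a trained liveness network; the weights, image, and ε value are illustrative, not taken from the paper.

```python
import numpy as np

# Toy differentiable "classifier": p(live | x) = sigmoid(w . x + b).
# w, b, and the flattened "fingerprint" x_clean are hypothetical stand-ins.
rng = np.random.default_rng(0)
w = rng.normal(size=64)
b = 0.1
x_clean = rng.uniform(0.0, 1.0, size=64)

def predict(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))  # probability of class "live"

def grad_wrt_input(x, y):
    # Gradient of the cross-entropy loss w.r.t. the INPUT, with the
    # classifier's parameters held fixed: dL/dx = (p - y) * w here.
    return (predict(x) - y) * w

# Untargeted attack: take one gradient-sign step away from the true label.
y_true = 1.0 if predict(x_clean) >= 0.5 else 0.0
eps = 0.1
x_adv = np.clip(x_clean + eps * np.sign(grad_wrt_input(x_clean, y_true)), 0.0, 1.0)
```

The clip keeps the adversarial sample a valid image, and the ε factor bounds the infinity norm of the perturbation, exactly the constraint described above.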
Since Szegedy et al. explored this property, with many efficient and robust attack methods crafted continuously, a potential security threat to practical deep learning applications has come into view. For instance, face recognition systems using CNNs also show vulnerability to adversarial samples [17, 18, 19]. Such biometric information is typically used for sensitive purposes or in scenarios requiring high security, fingerprints especially, owing to their uniqueness across individuals. Considering this, we extend similar work to another application, fingerprint liveness detection; to our knowledge, we are the first to introduce adversarial attacks into this area. The fingerprint liveness detection module is commonly deployed in fingerprint authentication systems. This technology aims to distinguish whether a fingerprint comes from a live part of a person or is a fake forged with silicone or similar materials. Approaches are generally divided into hardware- and software-based, depending on whether additional sensors are required. The latter can easily be integrated into most systems and has therefore received more attention; it can be further classified as feature-based or deep learning-based. Among them, deep learning-based solutions have attracted rising interest in recent years thanks to the rise of deep learning. Although they achieve far better performance than feature-based solutions, the vulnerability of CNNs leaves a potential risk: a correctly classified fake fingerprint can pass the detection module when presented as its adversarial sample. Even if attackers cannot cheat a fingerprint recognition system with a fake fingerprint directly, they may still defeat the system by supplying an adversarial fingerprint image.
In this paper, we thoroughly evaluate the robustness of several state-of-the-art fingerprint liveness detection models under both white-box and black-box attacks in various settings, and demonstrate the vulnerability of these models.
In our paper, we successfully attack deep learning-based fingerprint liveness detection methods, including the state-of-the-art one, using adversarial attack technology. Extensive experiments show that once these methods are open source, a malicious party can craft, for almost any fingerprint, an adversarial sample that poses as a live one and cheats the detection algorithms. Our work also shows that even if the details of these detection algorithms are unknown, there is still a definite possibility of realizing this attack. We further propose an enhanced adversarial attack method that generates adversarial samples which are more robust to various transformations and achieve a higher attack success rate than other advanced methods.
2 Related work
In this section, we review the development of adversarial attack methods and of deep learning-based fingerprint liveness detection models. According to current knowledge, deep neural networks achieve high performance on computer vision and natural language processing tasks because they can characterize arbitrary continuous functions with a large number of cascaded nonlinear units. But because the result is computed automatically by backpropagation via supervised learning, these models can be difficult to interpret and can have counterintuitive properties. And with the increasing use of deep neural networks in the physical world, these properties may be exploited for malicious behavior.
As shown in Fig. 1a, by solving this optimization problem we can compute the perturbations that, when added to a clean image, successfully fool a model, while the adversarial and original images remain hardly distinguishable to humans. It was also observed that a considerable number of adversarial examples are misclassified by different networks as well, a property known as cross-model generalization. These astonishing discoveries aroused strong interest among researchers in adversarial attacks on computer vision and gave birth to related competitions [21, 22].
where ∇J(…) computes the gradient of the cost function of the model w.r.t. X_c, and ε is a small coefficient that restricts the infinity norm of the perturbation. They caused a misclassification rate of 99.9% on a shallow softmax classifier trained on MNIST with ε = 0.25, and of 87.15% on a convolutional maxout network trained on CIFAR-10 with ε = 0.1. Miyato et al. then normalized the computed perturbation with the L2-norm on this basis. FGSM and its variants are classic one-shot methods that generate an adversarial sample in a single step. Later, in 2017, Kurakin et al. developed an iterative method, the Basic Iterative Method (BIM), that takes multiple steps to increase the loss function. Their approach greatly reduces the perturbation needed to generate an adversarial sample and poses a serious threat to deep models such as Inception-v3. Similarly, Moosavi-Dezfooli et al. proposed Deepfool, which also computes the minimum perturbation iteratively. The algorithm disturbs the image with a small vector, pushing the clean image, initially confined within the decision boundary, out of the boundary step by step until misclassification occurs. Dong et al. introduced momentum into FGSM: in their approach, at every iteration not only is the current gradient computed, but the gradient of the previous iteration is also added, with a decay factor controlling its influence. This Momentum Iterative Method (MIM) greatly improves cross-model generalization and black-box attack success rates, and their team won first prize in the NIPS 2017 Non-targeted Adversarial Attack and Targeted Adversarial Attack competitions. The above methods all compute the perturbation by solving a gradient-related problem, usually requiring direct access to the target model. To realize a more robust black-box attack, Su et al. proposed the One Pixel Attack, which searches for the perturbation that causes misclassification with the highest confidence by differential evolution instead of computing gradients. This method places no restraint on the perturbation size but limits the number of perturbed pixels.
With the development of adversarial attack technology, some scholars began to study attacks on real-world systems embedded with deep learning algorithms. Kurakin et al. first proved that the threat of adversarial attacks also exists in the physical world. They printed adversarial images and took snapshots with smartphones; the results show that even when captured by a camera, a relatively large portion of adversarial images are still misclassified. Eykholt et al. designed Robust Physical Perturbations (RP2), which perturbs only the target objects in the physical world, such as road signs, and keeps the background unchanged. For instance, sticking several black-and-white stickers on a stop sign according to RP2's output can prevent YOLO and Faster R-CNN from detecting it correctly. Bose et al. also successfully attacked Faster R-CNN with adversarial examples crafted by their proposed adversarial generator network, which solves a constrained optimization problem.
In addition to face localization, another key problem in face recognition is liveness detection. Biometrics such as faces are usually applied in systems with high security requirements, so these systems are typically accompanied by a liveness detection module that determines whether a captured face image is live or comes from a photo. Fingerprint identification systems likewise require liveness detection to distinguish live fingers from fake ones, and as more and more fingerprint liveness detection algorithms based on deep learning are developed, adversarial attacks have become a potential risk in this domain as well. To our knowledge, Nogueira et al. were the first to detect fake fingerprints using CNNs; in later work, they fine-tuned the fully connected layers of VGG and Alexnet with fingerprint datasets, leaving the preceding convolutional and pooling layers unchanged. This work achieved astonishing performance compared with feature-based approaches to fingerprint liveness detection. Chugh et al. cut fingerprint patches centered on pre-extracted minutiae and trained them with Mobilenet-v1; their results were state-of-the-art as we carried out this work. In the literature, Kim et al. proposed a detection algorithm based on a deep belief network (DBN) constructed layer by layer from restricted Boltzmann machines (RBMs). Nguyen et al. regarded the fingerprint as a global texture feature and designed an end-to-end model following this idea; their experimental results show that networks designed around the inherent characteristics of fingerprints can achieve better performance. Pala et al. constructed a triplet dataset to train their network: each triplet consists of a fingerprint to be detected, a fingerprint of the same class, and a fingerprint of the other class. This data structure imposes a constraint that minimizes within-class distance while maximizing between-class distance.
It is noteworthy that all the methods mentioned above are based on CNNs and achieve very competitive performance.
3.1 Networks to be attacked
3.1.1 VGG19 and Alexnet-based method
In this section, we briefly introduce the target networks we attempt to attack, including their specific structures and training processes. Before we conduct adversarial attacks on the state-of-the-art fingerprint liveness detection network, a pre-evaluation is carried out on the fine-tuned VGG and Alexnet. This is because fine-tuning classical models for new tasks is a widely used practice; though these models are somewhat dated, they have stood the test of time, and more advanced models derive from them. Equally thorough experiments are also carried out on the Mobilenet-v1-based method. Following Nogueira's method, both models are fine-tuned with stochastic gradient descent (SGD) with a batch size of 5, a momentum of 0.9, and a learning rate fixed at 1e−6.
3.1.2 Mobilenet-v1-based method
3.2 Methods to generate samples
In this paper, we compare four algorithms in total with regard to success rate, visual impact, and robustness to transformations. FGSM is the first, basic adversarial algorithm we test, using function (3); its effectiveness is evaluated by adjusting ε. MI-FGSM is the upgraded version of FGSM used in this paper; the number of iterations T and the momentum degree μ are two additional hyperparameters to control besides ε. We then make another evaluation with Deepfool and test our own modified method based on MI-FGSM. Deepfool automatically computes the minimum perturbation without a fixed ε. Since iterative methods have been shown to be stronger white-box adversaries than one-step methods at the cost of worse transferability, our method is designed to retain transferability to a certain extent.
The algorithm terminates when the sign of the classifier's output at x_i changes or the maximum number of iterations is reached. The Deepfool algorithm for binary classifiers is summarized as follows.
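The binary-classifier loop just described can be sketched as follows. The affine classifier at the bottom is a hypothetical toy model for illustration, not a liveness network from the paper; for an affine f the loop converges in a single linearization step.

```python
import numpy as np

def deepfool_binary(x, f, grad_f, max_iter=100, overshoot=0.02):
    """Deepfool for a binary classifier where sign(f(x)) is the predicted class.

    At each step the decision boundary is linearized around x_i and the
    minimal L2 perturbation onto that hyperplane is applied:
        r_i = -f(x_i) / ||grad f(x_i)||^2 * grad f(x_i)
    The loop stops when sign(f) flips or max_iter is reached.
    """
    x0 = x.astype(float).copy()
    xi = x0.copy()
    orig_sign = np.sign(f(x0))
    for _ in range(max_iter):
        if np.sign(f(xi)) != orig_sign:
            break
        g = grad_f(xi)
        ri = -f(xi) / (g @ g) * g
        xi = xi + (1 + overshoot) * ri  # small overshoot pushes past the boundary
    return xi, xi - x0

# Hypothetical toy affine classifier f(x) = w.x + b (not from the paper).
w = np.array([1.0, -2.0, 0.5])
b = 0.3
f = lambda x: w @ x + b
grad_f = lambda x: w
x_clean = np.array([0.8, 0.1, 0.4])  # f(x_clean) > 0, i.e. class +1
x_adv, r = deepfool_binary(x_clean, f, grad_f)
```

The small overshoot factor (0.02 in the original Deepfool paper) ensures the sample actually crosses the boundary rather than landing exactly on it.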
3.2.2 Momentum iterative fast gradient sign method
The momentum iterative fast gradient sign method (MI-FGSM) upgrades the basic FGSM in two steps. I-FGSM iteratively applies multiple steps with a small step size α, and MI-FGSM further introduces momentum. The momentum method is a technique to accelerate and stabilize the stochastic gradient descent algorithm: gradients from previous iterations are accumulated into the current gradient direction of the loss function, which can be regarded as a velocity vector passed through every iteration. Dong et al. first applied the momentum technique to generating adversarial samples, with substantial benefits. MI-FGSM is summarized below.
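The MI-FGSM loop can be sketched as below. The toy logistic classifier used for the demonstration is a hypothetical stand-in; the hyperparameter defaults mirror the settings reported in Section 4 (ε = 0.12, 10 iterations, decay 0.5), and the L1 normalization of the gradient follows Dong et al.

```python
import numpy as np

def mi_fgsm(x, y, loss_grad, eps=0.12, n_iter=10, decay=0.5):
    """MI-FGSM sketch: iterative FGSM with an accumulated (momentum) gradient.

    loss_grad(x, y) must return dL/dx for the fixed classifier. Per iteration:
        g_{t+1} = decay * g_t + grad / ||grad||_1
        x_{t+1} = clip(x_t + alpha * sign(g_{t+1}))
    with step size alpha = eps / n_iter, the total perturbation kept inside
    the eps-ball around the clean image, and pixels kept in [0, 1].
    """
    alpha = eps / n_iter
    g = np.zeros_like(x, dtype=float)
    x_adv = x.astype(float).copy()
    for _ in range(n_iter):
        grad = loss_grad(x_adv, y)
        g = decay * g + grad / (np.abs(grad).sum() + 1e-12)  # L1-normalized
        x_adv = np.clip(x_adv + alpha * np.sign(g), x - eps, x + eps)
        x_adv = np.clip(x_adv, 0.0, 1.0)
    return x_adv

# Hypothetical toy classifier for illustration only.
rng = np.random.default_rng(1)
w = rng.normal(size=16)
logit = lambda x: w @ x
loss_grad = lambda x, y: (1.0 / (1.0 + np.exp(-logit(x))) - y) * w
x_clean = rng.uniform(0.2, 0.8, size=16)
x_adv = mi_fgsm(x_clean, y=1.0, loss_grad=loss_grad)
```

Because the accumulated gradient g keeps a memory of earlier directions, the update direction oscillates less between iterations, which is what improves transferability over plain I-FGSM.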
3.2.3 Transformation robust attack
During the experiments, we found that adversarial samples generated by these methods are not robust enough to image transformations such as resizing, horizontal flipping, and rotation. However, these transformations commonly occur in the physical world, and to generate adversarial samples that can successfully attack detection modules under such conditions, we have to take this demand into account. A heuristic and natural idea is to add slight Gaussian noise to disturb the sample at every iteration. By additionally rotating the sample at random by a very small angle, we can improve its robustness to rotation and even its transferability to a different model. Note that with the added noise, the overall perturbation degree increases compared to the original MI-FGSM.
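A minimal sketch of this transformation-robust variant is given below: it is the MI-FGSM loop with the gradient taken at a randomly disturbed copy of the current sample. This is our hedged re-implementation, not the authors' exact code; only Gaussian noise is shown here, and for a 2-D image the small random rotation described above could be applied at the same point (e.g. with scipy.ndimage.rotate).

```python
import numpy as np

def tra(x, y, loss_grad, eps=0.12, n_iter=20, decay=0.5, noise_std=0.1, rng=None):
    """Transformation-robust attack sketch (assumptions noted in the text).

    Same momentum loop as MI-FGSM, but each iteration's gradient is computed
    at a randomly disturbed copy of the current sample, so the perturbation
    that accumulates is one that survives small input changes.
    """
    rng = rng or np.random.default_rng(0)
    alpha = eps / n_iter
    g = np.zeros_like(x, dtype=float)
    x_adv = x.astype(float).copy()
    for _ in range(n_iter):
        # Disturb the sample before taking the gradient (Gaussian noise here;
        # a small random rotation would be applied at this point for images).
        x_disturbed = x_adv + rng.normal(0.0, noise_std, size=x.shape)
        grad = loss_grad(x_disturbed, y)
        g = decay * g + grad / (np.abs(grad).sum() + 1e-12)
        x_adv = np.clip(x_adv + alpha * np.sign(g), x - eps, x + eps)
        x_adv = np.clip(x_adv, 0.0, 1.0)
    return x_adv

# Hypothetical toy classifier, as in the MI-FGSM sketch.
rng = np.random.default_rng(1)
w = rng.normal(size=16)
logit = lambda x: w @ x
loss_grad = lambda x, y: (1.0 / (1.0 + np.exp(-logit(x))) - y) * w
x_clean = rng.uniform(0.2, 0.8, size=16)
x_adv = tra(x_clean, y=1.0, loss_grad=loss_grad)
```

Averaging the attack direction over randomly transformed copies is what buys the extra robustness, at the cost of the slightly larger overall perturbation noted above.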
4 Results and discussion
In this section, we conduct different adversarial attacks on the above models. Details are available in the following parts. In general, we compare the success rates of the different attack methods against the different models and, furthermore, evaluate their robustness to various transformations such as rotation and resizing.
Summary of the liveness detection datasets used in our work (table): for each dataset, the image resolution (315 × 372, 640 × 480, 800 × 750, 500 × 500, 1000 × 1000, and 640 × 480) and the numbers of live and spoof images in the train/test splits.
We adjust ε in FGSM to control the perturbation degree; five values, 0.03, 0.06, 0.09, 0.12, and 0.15, are tested on all three detection algorithms trained on LivDet2013 in a white-box manner. Since Deepfool automatically searches for the minimum perturbation, it does not restrict the perturbation degree; however, we limit the maximum number of iterations to 100 to keep the time consumption acceptable, and 100 is also a moderate value that ensures most fingerprint images can be converted to adversarial samples. For MI-FGSM, we set ε = 0.12, iterations = 10, and decay factor = 0.5 according to the existing literature and our preliminary test results. Our method originates from MI-FGSM, so we apply similar settings but raise the number of iterations to 20. The added noise follows a Gaussian distribution with a standard deviation of 0.1 and a mean of 0. Meanwhile, we set the angle of random rotation between −5° and 5°.
To evaluate the feasibility of black-box attacks, we trained our own detection models. We first consider two models, one shallow with several layers and the other much deeper. The shallow one consists of 4 convolutional layers with 3 × 3 kernels and stride 2, so no pooling layers are involved; each layer is twice as deep as the previous one, starting from 32 channels in the first layer. The deeper one consists of 5 blocks, each containing 3 convolutional layers with BN layers; the number of kernels doubles from block to block, remains consistent within a block, and is 32 in the first block. In addition to the models above, we further train two ensemble models with three branches each: one shallow and one relatively deep as well. The branches differ from each other in kernel size, number of kernels, and pooling method, an idea that originates from the inception module. The reason we make the black-box attack models ensembles is that a successful attack on a collection of models may improve attacks on a single model; this is a natural intuition and has been verified in our work. The specific structures of the above four models differ per dataset and were chosen via an extensive search. Finally, we prepared five kinds of transformations to study their influence. Resizing means that we enlarge the adversarial sample by 2× and restore it to the original size, which is approximately equal to adding very small noise, depending on the scaling method. We also horizontally flip the samples and rotate them at a random angle between −30° and 30°; combinations of resize-and-flip and resize-and-rotation are also considered.
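The shallow black-box model described above (4 convolutional layers, 3 × 3 kernels, stride 2, channels doubling from 32) can be sanity-checked with a quick shape calculation. The "same" padding (pad = 1) and the 224 × 224 input resolution are our assumptions for illustration; the paper does not specify them.

```python
# Feature-map bookkeeping for the shallow black-box model sketched above:
# 4 conv layers, 3x3 kernels, stride 2 (no pooling), channels 32, 64, 128, 256.
# pad=1 ("same"-style) and a 224x224 input are hypothetical assumptions.

def conv_out(size, kernel=3, stride=2, pad=1):
    # Standard convolution output-size formula.
    return (size + 2 * pad - kernel) // stride + 1

size, channels = 224, 3
shapes = []
for layer_channels in (32, 64, 128, 256):
    size = conv_out(size)
    channels = layer_channels
    shapes.append((channels, size, size))

print(shapes)  # spatial size halves each layer: 112, 56, 28, 14
```

With stride 2 doing the downsampling, each layer halves the spatial resolution while doubling the channel count, which is why no pooling layers are needed.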
Success rate of FGSM attacks with different ε in a white-box manner (table; columns: ε = 0.03, 0.06, 0.09, 0.12, and 0.16, in %). Bio2013, Ita2013, and Cro2013 denote the Biometrika, ItalData, and Crossmatch datasets in LivDet2013, respectively.
Success rate of different methods in a white-box manner (table; rows: model and dataset). Gre2015, Bio2015, and Cro2015 denote the GreenBit, Biometrika, and CrossMatch datasets in LivDet2015, respectively.
Average robustness computed for different methods (table): for each model, we randomly pick 200 samples from GreenBit, Biometrika, and CrossMatch in LivDet2015, respectively, and compute their average robustness. Values range from 3.4 × 10−2 to 9.1 × 10−2; the FGSM (ε = 0.12) row shows the largest values (7.4 × 10−2 to 9.1 × 10−2).
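The "average robustness" reported in the table is, following the Deepfool paper, the mean ratio of perturbation norm to image norm over the sampled set. A minimal sketch, with hypothetical toy data whose perturbations are fixed at 5% of each image's norm:

```python
import numpy as np

def average_robustness(clean_images, adv_images):
    """Mean of ||r(x)|| / ||x|| over a set of images (rho_adv in the Deepfool
    paper). Smaller values mean the model is fooled by smaller perturbations
    relative to the image magnitude."""
    ratios = [np.linalg.norm(xa - x) / np.linalg.norm(x)
              for x, xa in zip(clean_images, adv_images)]
    return float(np.mean(ratios))

# Hypothetical toy data: each perturbation scaled to 5% of the image norm.
rng = np.random.default_rng(0)
clean = [rng.uniform(0, 1, size=(8, 8)) for _ in range(4)]
adv = [x + 0.05 * np.linalg.norm(x) * (d := rng.normal(size=(8, 8))) / np.linalg.norm(d)
       for x in clean]
print(round(average_robustness(clean, adv), 3))  # → 0.05
```

On this metric, a larger value for a method (as with FGSM in the table) means its adversarial samples carry proportionally larger, more visible perturbations.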
Error rate of different models on Biometrika2013 and Biometrika2015
Black-box attacks with adversarial samples generated from different models by MI-FGSM and TRA (table; columns: VGG19, Alexnet, and MobilenetV1 on Bio2013, and VGG19 and Alexnet on Bio2015; success rates in %).
Robustness to transformations of different adversarial attack methods (table; rows: methods and models; columns include resize-and-flip and resize-and-rotation). We randomly pick 300 samples from GreenBit, Biometrika, and CrossMatch in LivDet2015, respectively, and generate their corresponding adversarial samples to attack VGG19.
In this work, we provided extensive experimental evidence that cheating excellent deep learning-based fingerprint liveness detection schemes with adversarial samples is feasible. These detection networks can easily be broken through by basic FGSM in a white-box manner at the cost of some perturbation. With more advanced methods such as Deepfool and MI-FGSM, almost any fingerprint image can be turned into an adversarial sample with more imperceptible changes. We note that adversarial samples generated by the above methods are not robust enough to transformations such as resizing, horizontal flipping, and rotation; thus, we also proposed an algorithm that generates adversarial samples slightly more robust to various transformations by adding noise and random rotations at every iteration. These methods are evaluated on the LivDet2013 and LivDet2015 datasets. According to our results, a small portion of adversarial samples transfer across different models, indicating that misclassification can also be caused in black-box scenarios. In terms of robustness to transformations, further evaluations demonstrate that the proposed method slightly surpasses the others. These results highlight the potential risks of existing fingerprint liveness detection algorithms, and we hope our work will encourage researchers to design detection algorithms with innate adversarial robustness to achieve higher security.
This work is supported in part by the Jiangsu Basic Research Programs-Natural Science Foundation under grant number BK20181407, in part by the National Natural Science Foundation of China under grant number 61672294, in part by the Six Peak Talent project of Jiangsu Province (R2016L13), the Qing Lan Project of Jiangsu Province, and the "333" project of Jiangsu Province, in part by the National Natural Science Foundation of China under grant numbers U1836208, 61502242, 61702276, U1536206, 61772283, 61602253, 61601236, and 61572258, in part by the National Key R&D Program of China under grant 2018YFB1003205, in part by NRF-2016R1D1A1B03933294, in part by the Jiangsu Basic Research Programs-Natural Science Foundation under grant numbers BK20150925 and BK20151530, in part by the Humanity and Social Science Youth Foundation of the Ministry of Education of China (15YJC870021), in part by the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD) fund, and in part by the Collaborative Innovation Center of Atmospheric Environment and Equipment Technology (CICAEET) fund, China. Zhihua Xia is supported by the BK21+ program of the Ministry of Education of Korea.
JF and ZX collectively designed the research, performed the research and wrote the paper.
PY partly performed the research, analyzed the data, and partly wrote the paper.
FX partly designed the research, wrote the paper, and modified the paper. All authors read and approved the final manuscript.
This work is funded by the National Natural Science Foundation of China under grant numbers 61672294.
The authors declare that they have no competing interests.
- 1. Y. Zheng, X. Xu, L. Qi, Deep CNN-assisted personalized recommendation over big data for mobile wireless networks. Wireless Communications and Mobile Computing 2019 (2019)
- 2. Y. Zheng, J. Zhu, W. Fang, L.-H. Chi, Deep learning hash for wireless multimedia image content security. Security and Communication Networks 2018 (2018)
- 3. H. Wang et al., Cosface: large margin cosine loss for deep face recognition, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 5265–5274
- 4. K. Cao, Y. Rong, C. Li, X. Tang, C. Change Loy, Pose-robust face recognition via deep residual equivariant mapping, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 5187–5196
- 5. Y. Sun, D. Liang, X. Wang, X. Tang, Deepid3: face recognition with very deep neural networks. arXiv preprint arXiv:1502.00873 (2015)
- 6. Z. Zheng, X. Yang, Z. Yu, L. Zheng, Y. Yang, J. Kautz, Joint discriminative and generative learning for person re-identification, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2019), pp. 2138–2147
- 7. Y. Li, C. Huang, C.C. Loy, X. Tang, Human attribute recognition by deep hierarchical contexts, in European Conference on Computer Vision (Springer, 2016), pp. 684–700
- 8. P. Li, X. Chen, S. Shen, Stereo R-CNN based 3D object detection for autonomous driving, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2019), pp. 7644–7652
- 9. F. Codevilla, M. Müller, A. López, V. Koltun, A. Dosovitskiy, End-to-end driving via conditional imitation learning, in 2018 IEEE International Conference on Robotics and Automation (ICRA) (IEEE, 2018), pp. 1–9
- 10. C. Szegedy et al., Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199 (2013)
- 11. A. Krizhevsky, I. Sutskever, G.E. Hinton, Imagenet classification with deep convolutional neural networks, in Advances in Neural Information Processing Systems (2012), pp. 1097–1105
- 12. Z. Xia, L. Jiang, D. Liu, L. Lu, B. Jeon, BOEW: a content-based image retrieval scheme using bag-of-encrypted-words in cloud computing. IEEE Transactions on Services Computing (2019)
- 14. Z. Xia, L. Jiang, X. Ma, W. Yang, P. Ji, N. Xiong, A privacy-preserving outsourcing scheme for image local binary pattern in secure industrial internet of things. IEEE Transactions on Industrial Informatics (2019)
- 17. M. Sharif, S. Bhagavatula, L. Bauer, M.K. Reiter, Accessorize to a crime: real and stealthy attacks on state-of-the-art face recognition, in Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security (ACM, 2016), pp. 1528–1540
- 18. M. Sharif, S. Bhagavatula, L. Bauer, M.K. Reiter, Adversarial generative nets: neural network attacks on state-of-the-art face recognition. arXiv preprint arXiv:1801.00349 (2017)
- 19. G. Goswami, N. Ratha, A. Agarwal, R. Singh, M. Vatsa, Unravelling robustness of deep learning based face recognition against adversarial attacks, in Thirty-Second AAAI Conference on Artificial Intelligence (2018)
- 20. H. Tang, X. Qin, Practical Methods of Optimization (Dalian University of Technology Press, Dalian, 2004), pp. 138–149
- 21. A. Kurakin et al., Adversarial attacks and defences competition, in The NIPS'17 Competition: Building Intelligent Systems (Springer, 2018), pp. 195–231
- 22. W. Brendel et al., Adversarial vision challenge. arXiv preprint arXiv:1808.01976 (2018)
- 23. I.J. Goodfellow, J. Shlens, C. Szegedy, Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572 (2014)
- 25. A. Kurakin, I. Goodfellow, S. Bengio, Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533 (2016)
- 26. C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, Z. Wojna, Rethinking the inception architecture for computer vision, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2016), pp. 2818–2826
- 27. S.-M. Moosavi-Dezfooli, A. Fawzi, P. Frossard, Deepfool: a simple and accurate method to fool deep neural networks, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2016), pp. 2574–2582
- 28. Y. Dong et al., Boosting adversarial attacks with momentum, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 9185–9193
- 29. J. Su, D.V. Vargas, K. Sakurai, One pixel attack for fooling deep neural networks. IEEE Transactions on Evolutionary Computation (2019)
- 30. K. Eykholt et al., Robust physical-world attacks on deep learning models (2018)
- 32. Z. Xia, C. Yuan, R. Lv, X. Sun, N.N. Xiong, Y.-Q. Shi, A novel weber local binary descriptor for fingerprint liveness detection. IEEE Transactions on Systems, Man, and Cybernetics: Systems (2018)
- 33. R.F. Nogueira, R. de Alencar Lotufo, R.C. Machado, Evaluating software-based fingerprint liveness detection using convolutional networks and local binary patterns, in 2014 IEEE Workshop on Biometric Measurements and Systems for Security and Medical Applications (BIOMS) (IEEE, 2014), pp. 22–29
- 38. F. Pala, B. Bhanu, Deep triplet embedding representations for liveness detection, in Deep Learning for Biometrics (Springer, 2017), pp. 287–307
- 39. I. Sutskever, J. Martens, G. Dahl, G. Hinton, On the importance of initialization and momentum in deep learning, in International Conference on Machine Learning (2013), pp. 1139–1147
- 40. C. Kai, E. Liul, L. Pangi, J. Liangi, T. Jie, Fingerprint matching by incorporating minutiae discriminability, in International Joint Conference on Biometrics (2011)
- 42. L. Ghiani et al., LivDet 2013 fingerprint liveness detection competition 2013, in IAPR International Conference on Biometrics (2013)
Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.