Implantable Smart Technologies (IST): Defining the ‘Sting’ in Data and Device
In a world surrounded by smart objects from sensors to automated medical devices, the ubiquity of ‘smart’ seems matched only by its lack of clarity. In this article, we use our discussions with expert stakeholders working in areas of implantable medical devices such as cochlear implants, implantable cardiac defibrillators, deep brain stimulators and in vivo biosensors to interrogate the different facets of smart in ‘implantable smart technologies’, considering also whether regulation needs to respond to the autonomy that such artefacts carry within them. We discover that when smart technology is deconstructed it is a slippery and multi-layered concept. A device’s ability to sense and transmit data and automate medicine can be associated with the ‘sting’ of autonomy being disassociated from human control as well as affecting individual, group, and social environments.
Keywords: Health biotechnologies; Medical devices; Smart; Autonomy; Vulnerability; Regulation
In recent decades, and beginning with the ‘information technology revolution’, multiple so-called technology revolutions have occurred, with the biosciences [6, 10, 23, 30, 45], the nanosciences, and the computing sciences all implicated. The ‘bioscience revolution’ relies on a collection of technologies, techniques and practices that implicate human physiology. Molecular biology (which draws on biochemistry and genomics) and synthetic biology (which designs and constructs artificial biological systems having reference to engineering and computational disciplines) are particularly relevant. While the ‘revolutionary’ nature of these technological movements can be questioned, their co-evolution and convergence cannot. One of the drivers for this phenomenon is health. As people’s expectations of functionality and ambitions for self-actualisation grow, health grows in significance as a phenomenon of socio-political, economic and scientific concern. In short, mounting and diversifying healthcare pressures serve as critical landscape-shapers for these technology fields [21, 26, 39], making function-restoring and function-replacing technologies increasingly important.
[M]an is neither the oldest nor the most constant problem that has been posed for human knowledge. … As the archaeology of our thought easily shows, man is an invention of recent date. And one perhaps nearing its end. If … some event of which we can at the moment do no more than sense the possibility—without knowing either what its form will be or what it promises—were to cause them to crumble, as the ground of Classical thought did, at the end of the eighteenth century, then one can certainly wager that man would be erased, like a face drawn in sand at the edge of the sea [11: pp 386–387].
…the substitution in question has to be visible: if it is to exert its fascination without creating insecurity, the robot must unequivocally reveal its nature as a mechanical prosthesis (its body is metallic, its gestures are discrete, jerky and unhuman) [1: p. 129].
In essence, we are concerned with that slow-developing ‘event’ to which Foucault alludes, an event which might be seen as rooted in the here-and-now and unfolding slowly before our eyes: the decentralisation of the human in contemporary thought. Foucault’s erasure of the human can also be interpreted as referring to a posthuman, or rather to techno-human hybridity resulting in a new mode of being.
In the 1960s, the techno-human was envisaged as having cybernetic mechanisms, made necessary by the demands of future space travel. Authors predicted that space travellers would need a closed-loop feedback mechanism to regulate bodily responses in a hostile environment, and they offered the term ‘cyborg’ for that entity. Implantable smart technologies included osmotic pumps to deliver drugs and electrical stimulation of both the heart and brain during space travel. No unpleasant after-effects in terms of loss of control were considered, as the hypothetical space traveller was unconscious. Thus the increasing reliance and dependence on smart technologies was not seen to threaten an individual’s ability to control the effects that (multiple) internal auto-biotechnologies may have on them. That is why, importantly, we are not here interested in implantable technologies of the ‘carpentry kind’, such as hip or knee joints or other static prosthetics, but rather in those unobtrusive technologies that might be considered ‘smart’. It is these technologies, increasingly powerful and internalised, that have the potential to transform that which is, and give rise to new modes of being (i.e., Foucault’s man ‘erased, like a face drawn in sand at the edge of the sea’).
Increasingly intelligent and embedded technologies are playing a more important and, more importantly, a more ‘normalised’ role in people’s lives (i.e., they are on the path to becoming mundane, everyday and ubiquitous). The application of such smart technologies has the very real potential to excite the fears cautioned by Baudrillard insofar as they necessitate the implantation of auto techno-devices into the human body, masking and hiding what the device might be monitoring, reporting or interfering with, in, for and on us. What greater ontological insecurity could there be than that created by a device that is autonomous and intimate, and, ironically, through this intimacy, may be outwith individual control and others’ sight?
What are the characteristics of an implantable technology that we might want to call smart?
Do the ‘smart’ qualities of these technologies have any social or practice implications?
First, drawing on our empirical data, we unpack the idea of ‘smart’ in relation to four specific implanted devices, namely cochlear implants (CI), implantable cardiac defibrillators (ICD), in vivo biosensors (IVBS), and deep brain stimulators (DBS), each of which was singled out during data collection as an exemplar of current ‘smart’ technologies. We conclude that the term holds multiple and not always compatible meanings. Second, we suggest that ‘smart’ can carry a ‘sting’, or rather multiple stings relating to, on the one hand, being complex and responsive, and on the other hand, igniting concerns about lack of control and vulnerability. We end with the thought that smart technology can give autonomy whilst simultaneously taking it away. However, the (partial) erasure of the (techno)-human is a highly variable process partly related to the functionality and accessibility of the implantable smart device.
Defining ‘Smart’ in ISTs
Smart technology is where you’re using information, I would say smart technology is where you get information from sensors, or from imaging, or from some other way of measuring what’s going on, and you make a decision based on that.
In a regulatory context, I’ve never heard [smart] used. … What would it mean to me? I suppose … it would be something along the lines of … these type devices, but ones that are more sophisticated. I would say that, if they’re implanted and they’re smart, then they’ll act on chemical signals within the body that would then end up doing something else. … A lot of these do similar sorts of things: pick-up signals within the body and then do something. … You’d assume that ‘smart’ is looking at probably a wider variety of factors and maybe could do more than one thing as well.
I think this term smart just depends on how people want to use it actually. It seems to me it’s got to do with complexity, it’s got to do with responsiveness.
What would I envisage smart being? … Smart gives you the impression that you’re implanting something which is going to do something itself which the body might not be able to do. That’s how I suppose I would envisage what it means … Because it’s something that’s being implanted in the body in order to help it do something it’s not [doing], I suppose. But I see that, and from the purpose of smart, I see that as that sounds smart to me, more advanced.
Pacemakers and implantable defibrillators [are] checking what the body’s doing, and if it’s not doing it, they will take over, so it has almost got an artificial intelligence element to it, and it’s got decision-making algorithms that if this happens then you will do this, if this happens, it will do that.
It would be a closed loop. The way that would work, the chip would be in there, so inside there you would measure pressure then almost certainly transmit that out [to] a computer. But the computer would then have to decide, ‘Is this abnormal for this person? If it is, we need to release some glaucoma drug. So then it would transmit back into say, ‘Release the drug’. So it is a closed loop.
It seems like an irrelevant contingency if it was wirelessly sending something to be processed on a laptop on the other side of someone’s hospital room or something, and then wirelessly getting information to perform in some way. That seems just as smart as if the processing was being done within a bigger, more complicated, more heavily powered device, all inside the person. That doesn’t seem to be an interesting distinction that takes away its smartness, just because there was wireless communication with some more processing power somewhere else. It does seem to be less smart if all [the device] is doing is giving information to a clinician who then goes and gets a syringe and delivers treatment. That seems to be just monitoring, [which] isn’t smartness. It’s about the responsiveness but some of that responding loop could be on the other side of the room, or the other side of the world.
So the functions performed by the device could be performed both internally and externally, but the point is that the ‘loop’ is closed to the patient and physician; decisions are made by the device or the system that the device is ‘plugged into’, which also highlights (1) the ‘autonomy’ element to the device and (2) the loss of human control.
Yet, for the person who is using the smart device to enhance quantity or quality of life, there are different consequences for individual autonomy depending on the functioning and capability of the device itself. Hence, returning to Foucault’s ‘erasure’ of the (hu)man, we can argue that partial erasure depends on the effects of loss of physiological control. This is, to a certain extent, dependent on the intended functioning of the device.
So, something that somehow expresses that continuousness between a little tiny very skilled but highly specialised doctor inside you talking to a more skilled doctor outside you who then consults a whole team of [specialists] to say what to do next.
I think it probably makes [the device] smarter if it leads to an improvement in treatment. I think you can’t really just use these devices for, ‘Isn’t it clever we’re sensing something, but we can’t do anything about it’…so I think it has to probably be a closed loop to improve the clinical result.
A lot of these are put in for specific correction of function. Pacemakers, cochlear implants, they’re not really sensing as such. The abnormality is already known, and it’s there to correct it, or kick-in if it’s detected. Whereas, some of these newer smarter [devices], I think, are [in] more fluid situations. There’s more uncertainty around the readings and what they’re going to mean, and when they’re going to need to be used. I can imagine the glucose sensor and insulin pump would give varying amounts of insulin day to day because there’s going to be varying activity, for instance, going on in the patient, so it’s a much more dynamic, fluid situation [emphasis added].
Without this aspect of automation of the therapeutic component, some questioned the basis for considering a device ‘smart’. In this regard, truly smart devices such as ICDs could be differentiated from CIs and IVBSs.
… The brain adapts to that way of hearing … through a process of us fine-tuning the devices, because it’s not just one appointment, they come to see us about seven times in the first year just for us to fine-tune it and to keep the levels, like up here and they’re backwards and forwards.
Hence, Respondent 4, a CI specialist, confirmed that success with CIs was “probably 15 % technology and 85 % person; the things that are important are good surgery to put the implant in place, good programming to tailor the device for the individual, and then individual patient compliance, perseverance and luck.” Baudrillard’s analysis of the ontological insecurity of implanted modifications is a key starting point for examining the effects of invisible technologies. Although where the information from the closed loop system is received may not matter, as discussed earlier, the fact that the device is inside the body, and therefore, ironically, out of reach, contributes to the lack of control. CIs remain partly under the recipient’s control, since the transmitter placed behind the ear can be removed; when worn, that transmitter is visible. Baudrillard’s insight is that the invisibility of the smart device causes angst in the bystander. And although Baudrillard demonstrates the darker side of implanting biotechnologies, he does not discuss the consequences for recipients, or the varying effects that different invisible technologies can have on those in whom they are implanted. Closed loop systems inserted into the interior of the body offer therapy through the ability of the device to sense and respond autonomously. But there is a ‘sting’ that comes with the smart, and we turn to that shortly.
… words like ‘micro’ and ‘nano’, get used and abused widely. It’s not a terribly helpful word because there are some things that are very smart and require a great deal of machine intelligence and processing.
For others, depending on the criteria focused on (as between intelligence, autonomy, and responsiveness), one might categorise devices differently (i.e., as ‘smart’ under one criterion but as ‘not smart’ under another; a CI is not smart as it does not have a sensory ability, but it is smart as it delivers a therapy; DBSs and ICDs are smart as they are both sensitive and responsive). All this makes authoritatively categorising devices both difficult and problematic. Moreover, when respondents were discussing particular technologies, as opposed to ‘smart’ in the abstract, they were not necessarily consistent about the most important facets of smart. Their assessment of smartness in practice was contingent on other factors, such as the newness of the technology (hence ICDs were seen as smarter than pacemakers) and familiarity with the technology (some participants normalised the technologies they worked with and tended to view auto-biotechnologies they knew less about as more complex; for example, Respondent 2 was familiar with IVBSs but not DBSs). Relatedly, where a device was placed in the body could influence whether it was viewed as more or less smart, though this was not usually mentioned in the abstract discussions of smart (e.g., DBSs tended to be seen as smarter because they were located in the brain and were inaccessible, whereas CIs were sometimes viewed as less smart because their location was not thought significant and they were accessible).
sensitivity (i.e., record and transmit information about an environment. Increased smartness is linked to complexity of what is being sensed and processed);
responsiveness (i.e., reacts to changes in the environment and amends its functions; increased smartness is linked to the level of automation/closed loop);
autonomy (i.e., performing functions the body wouldn’t normally perform).
At one end of the continuum, smartness demands replacement of mechanical or sensory functions, generally allied to either preparation of data for internal processing, or transmission of data for external reporting and analysis. A higher level of smartness would include the possibility of direct or remote updating and adjusting of the instructional code which underlies the device (i.e., the software), thereby allowing the device’s actions to be modifiable (or the scope of the monitored criteria to be alterable). The highest level of smartness would involve the device being able to alter its monitoring criteria based on an autonomous assessment of its environment, and to adjust delivery of treatment accordingly. The other end of the continuum introduces a qualitative change in the way that devices implanted in the patient relate to the body; they manage, and will soon deliver, a sensory-based automated treatment platform that reacts to changes in their physiological environment.
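The continuum just described can be summarised in a small, purely illustrative sketch. The tier names and the classification rule are our own labels for the three levels above; they are not drawn from any standard or regulatory taxonomy.

```python
from enum import IntEnum

class Smartness(IntEnum):
    """Illustrative tiers of the 'smartness' continuum described above."""
    REPLACES_FUNCTION = 1  # replaces a mechanical/sensory function; data prepared or transmitted
    UPDATABLE = 2          # software can be updated/adjusted, so the device's actions are modifiable
    SELF_ADJUSTING = 3     # alters its own monitoring criteria and adjusts treatment autonomously

def classify(replaces_function: bool, updatable: bool, self_adjusting: bool):
    """Return the highest tier a hypothetical device satisfies, or None."""
    if self_adjusting:
        return Smartness.SELF_ADJUSTING
    if updatable:
        return Smartness.UPDATABLE
    if replaces_function:
        return Smartness.REPLACES_FUNCTION
    return None
```

On this sketch, a conventional pacemaker might sit at tier 1 or 2, while a glucose sensor coupled to an insulin pump would aim at tier 3; the point of the continuum is that ‘smart’ names a range, not a threshold.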
The ‘Stings’ in ‘Smart’
sense complex and dynamic physiological settings subject to multiple changes;
measure these potentially subtle changes and collate the data;
transmit this dynamic information in real-time; and
react to the changes in real-time by providing alterations in therapy (also delivered by the device).
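The four capabilities listed above can be sketched as a single iteration of a hypothetical control loop. Everything here is an assumption made for illustration: the tolerance value, the function names, and the injected `deliver_therapy` callback are invented, not drawn from any real device.

```python
TOLERANCE = 1.0  # illustrative threshold for 'abnormal for this person'

def control_cycle(reading: float, baseline: float, log: list, deliver_therapy) -> bool:
    """One iteration of the hypothetical sense -> measure -> transmit -> react loop."""
    deviation = reading - baseline  # measure the (potentially subtle) change
    log.append(deviation)           # collate/transmit the dynamic data in real time
    if abs(deviation) > TOLERANCE:  # the device, not a clinician, decides
        deliver_therapy(deviation)  # react by altering the therapy it delivers
        return True                 # therapy was adjusted this cycle
    return False
```

As the respondents note, `deliver_therapy` could run on-board or on a computer across the room; the loop is equally ‘smart’ either way, so long as it is closed to the patient and physician.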
Such devices shift treatment well into the sphere of ‘automated medicine’, but key technical hurdles remain in relation to their practical application in the clinical setting, not least being the need to harness sufficient power for sufficiently long periods of time, and to store (or have ready access to) sufficient treatment compounds to give them appropriate endurance. The issue of powering such complex devices is obviously a technical ‘sting’ that is being worked on (and was highlighted in the first empirical encounter we held: the workshop in 2009). However, there are also some other important ‘stings’ associated with ISTs that might be classified as social and regulatory. The two most obvious ones are (1) decreased intentionality, and (2) increased vulnerability, to which we now turn.
First and foremost, as foreshadowed by the above references to closed loops (which, of course, have some positive implications), there is the real possibility that the physician could recede from the treatment relationship and simultaneously lose control over the medical intervention (recognising, of course, that that control is typically exercised in collaboration with the patient). While the physician would (presumably) be implicated in the initial programming of the device (i.e., articulating the parameters of ‘normal’ functioning), much of the medical monitoring and action afterwards would be left to the machine intelligence. In this way, the physician is less ‘in touch’ with the treatment regime and the patient, and so is potentially less sensitive to the patient’s needs, including the need for the patient’s device to be adjusted. This reduction in the physician’s intentionality and immediacy could lead to an erosion of the doctor-patient relationship, which often relies on points of contact and positive interaction to build the trust needed for the most constructive relationships. At base, this means that, if the concept of ‘compassionate care’ is to drive healthcare forward (and that is certainly the policy objective at the moment), then serious thought will have to be given to how these devices enter into and reshape the clinical relationship.
Additionally, there may be a loss of control on the part of the patient, and this can diminish the responsibility the patient feels toward their own condition. Frequently presented as a ‘pacemaker for the brain’, deep brain stimulators (DBSs) comprise electrodes implanted in the brain, a pulse generator implanted in the chest (near the collarbone), and a subcutaneous wire connecting them. Intended to alleviate tremors, stiffness, and slowness caused by Parkinson’s disease, reports suggest that DBSs may have implications for improving lung function, memory, and mood disorders such as depression. DBSs have been the subject of intense investigation; studies have uncovered (1) very different expectations for, and tolerances about, chronic illnesses and the side effects of their treatment [16, 49]; (2) the variety and progression of emotional responses to DBS; and (3) the need for greater cooperation between stakeholders to create realistic public perceptions of DBS [15, 37]. DBSs have also been the subject of legal concern, for they have been known to cause significant personality change, which can have implications for capacity. So, for example, with monitoring and treatment removed from the patient and located in the implanted device, the patient might take less seriously the personal responsibility of adopting steps to otherwise maintain a healthy condition. On this, it has been argued that, although DBSs can significantly improve symptoms, they also take control out of the hands of the patient/person who has been managing their illness up to that point.
they have been in use longer than the other subject ISTs and so are more pervasive;
In any event, CIs have profound life and socialisation implications, which has resulted in strong resistance in some circles and hence has produced an affirmation, in some cases, of individual and group deaf identity. In other words, the patient might experience the same powerlessness (or functionless-ness) that the physician might feel, thereby further contributing to a weakening of the healthcare partnership that supports the best health outcomes. Again, this is not a given, but the potentiality of this requires us to pay attention to how devices impact on patient behaviour and clinical relationships as they become more ubiquitous.
[D]efibrillators [are] a mixed blessing really. On the one hand, it’s reassuring [that] it’s there, and if you need it, it goes off, but if it does go off, either it’s gone off correctly—but that brings its own concerns of, ‘Why did it go off?’ ‘Am I about to die?’ ‘Is something terrible going to happen to me?’—or it goes off inappropriately, and it’s quite painful, and you think, ‘I haven’t got any control over it, I can’t stop it’. And that can be quite difficult for patients.
The suddenness. Always just the suddenness of it. There’s no warning. I’m saying no warning, maybe just a minute or two [before] you say to yourself, ‘I’m not right’, and then it kicks in. You know in yourself, ‘I’m not right’; I know anyway, something’s going to happen. … I can’t even say it’s a physical thing.
In essence, as the cases of both ICDs and DBSs show, there could emerge a sense of being at the mercy not only of the foibles of one’s (failing/ailing) body, but also of the programmed actions of the device. In short, the device delivers an unpleasant intervention but also signals an untoward event, knowledge of which can be disconcerting. Implantation of an ICD is a major operation, and there are lifestyle and physiological adjustments to be made. Patients are dependent on the device to rescue them in the event of sudden heart arrhythmias. ICDs are known to cause anxiety and depression in patients [25, 27, 35, 41, 46], avoidance of physical activity and sexual contact, and to affect family relationships. The more shocks felt by the patient, the higher the anxiety [50, 52]. As well as the autonomy of the technology linking to the patient’s lack of control, Baudrillard’s earlier point about visibility can again be highlighted here. In the case of ICDs, the technology is almost invisible (only making its presence felt on the occasions it is activated), leading to an unsettling lack of clarity about the human/technology distinction.
As mentioned, IVBSs are less intrusive than ICDs, and their functionality was viewed by some of our participants as less ‘smart’ due to the device’s lack of a closed loop system. Indeed, early social science research with recovering prostate cancer patients demonstrates a willingness to accept IVBSs, and even an enthusiasm for a more ambitious functionality that goes beyond a beacon system (i.e., identifying the timing and location for radiotherapy) toward an IVBS that is a long-term surveillance system to assess the reappearance of cancer tumours. Having said that, periods of ‘acclimatisation’ were mentioned as being necessary, and willingness to have an IVBS may vary depending on the circumstances around implantation and the concomitant evaluation of what will be lost and gained from the IVBS: that is, some loss of autonomous control and increased vulnerability versus sensing and responding to a tumour recurrence.
The data becomes insecure, as soon as it leaves this device … but that’s not an implant problem, that’s a general medical data problem. The inside-the-body stuff I don’t think is a concern. The concern kicks in when you get outside the body and you start doing something with the data. Because nobody can get at the data unless they’re right up against you. I suppose, potentially, if someone really wanted to measure your tumour hypoxia while you’re on the Underground they could press something up against you and measure it. We’re into the silly world there.
As a lawyer, what I would be interested in is the extent to which [the device] could be, or is, controlled remotely, for example. Or its ability to self-regulate, and in what ways, and if it is self-regulating, then what other controls could be exercised over it if it became inappropriate or unsafe.
The pacemaker companies, I think, are worried that if their pacemaker is compromised then they could be liable, and it might be perceived that they haven’t made their device secure enough for the patient. So at the moment, all we can do is obtain diagnostic information, but we can’t reprogramme pacemakers remotely.
device identification by authorised entities;
data access and device reconfiguration by appropriate entities;
software upgrading by appropriate authorities;
multi-device coordination and communication; and
manufacture audit capabilities in the event of failure.
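Requirements of this kind can be illustrated with a minimal, purely hypothetical access-control sketch. The entity names, the permission sets, and the audit-record format are all invented for illustration; no real device interface is being described.

```python
# Hypothetical permission table: which entities may perform which actions on the device.
AUTHORISED = {
    "implanting-clinic": {"identify", "read_data", "reconfigure"},
    "manufacturer":      {"identify", "upgrade_software", "audit"},
}

audit_log = []  # every attempt is recorded, supporting audit in the event of failure

def request(entity: str, action: str) -> bool:
    """Grant the action only to an appropriately authorised entity, and log the attempt."""
    allowed = action in AUTHORISED.get(entity, set())
    audit_log.append((entity, action, allowed))
    return allowed
```

The design choice worth noting is that refused attempts are logged as well as granted ones: the audit trail, not just the permission check, is what the list above treats as a requirement.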
In an environment surrounded by technologies defined as smart, it is important to refine what it is about such interventions inside the human body that offers the greatest benefits, and also the greatest risks. We have discussed how smart in the context of medical technologies such as ICDs, IVBSs, CIs and DBSs can be thought of as relating to autonomy, responsiveness and complexity. However, the ability to manage a physiological feedback mechanism—the so-called ‘closed loop’, which originally featured in accounts of how humans could cope with space travel and was the genesis of the term ‘cyborg’—featured highly in respondent accounts. Yet regardless of the smartness of the technology, each of these implants affected individual, family, and group relationships in unintended and diverse ways, whether it was anxiety and depression in some ICD patients, identity challenges for DBS recipients, resentment in parts of the CI group, or the need to ‘acclimatise’, in some potentially unique ways, to an organic/inorganic, cyborg or techno-human status among IVBS patients.
One possible interpretation of the word smart, not mentioned in accounts, is its use as a noun or verb to mean a sharp, stinging pain. Hence, the ‘stings’ associated with smart implantable technologies are that the devices themselves are outwith the control of the implanted individual and often his or her physician, thereby reducing human intentionality, and, with increasing complexity and connectivity, generating new vulnerabilities for the patient and healthcare providers. However, we have come to believe that taking account of experiential views and of variable individual and group reactions to implantable smart technologies enriches philosophical discussions. Both Foucault and Baudrillard, for example, point to the ontological status of the human: the former to its possible erasure, and the latter to the angst caused by a process of indiscernible technological replacements. In a sense, we point to the unevenness of such a trajectory, and to the partiality of application as well as ambivalent compliance. We feel this is sometimes missed in theoretical discussions of the posthuman; this was the case in the earlier discussions we interpreted from Foucault, and it may also apply to other, later work. We accomplished this philosophical enrichment by using data from interviews with stakeholders to address the question of what is smart in ISTs. It was never our intention to produce a statistically generalisable result that would represent an evidence base on which other technologies can be assessed or judged. On the contrary, this exercise demonstrated that it is possible to investigate the current landscape by identifying those involved and producing an account of social, political and legal issues that is, in this case, based on the ‘inside views’ of those involved.
This leads us to argue that implanting medical technologies cannot be separated from the social practices and broader environment in which implantation takes place. Much has been written about ‘technological affordances’ and the middle ground between an under-socialised and an over-socialised view of technology, seeking to locate the consequences of the technology within its current social setting—smart devices offer opportunities to enhance both the quality and quantity of life, yet at the cost of control and heightened vulnerability. Further, implanted technologies are physically hidden, but their affordances—their actions—emerge during interactions with human practice. ‘Action possibilities’ are therefore simultaneously positive and negative in their consequences, in the same way that smart technologies are autonomous but also stinging and smart. We conclude that ‘smart’ relies on the presence of several attributes (e.g., autonomy, responsiveness, complexity) and is described by a continuum. Understanding the elements of smart will allow for the better regulatory classification of devices and of the issues to which they might give rise.
An initial pilot workshop in 2009; semi-structured qualitative interviews with thought-leaders and stakeholders in the field in 2013–14; and close collaboration with an artist (from 2012 to 2014) who both helped shape the interview process and generated interpretive artistic outputs relevant to our findings. For more on the ISTP, see http://masoninstitute.org/our-research/implantable-smart-technologies-technical-social-and-regulatory/.
Five with professionals working at the legal/ethical interface of medical technologies, three with medical practitioners and researchers, one with a clinical scientist, one with an engineer, and one with a patient reliant on an implanted device.
- 1. Baudrillard, J. (1996). The system of objects. London and New York: Verso.
- 3. Bjorn, P., & Markussen, R. (2013). Cyborg heart: The affective apparatus of bodily production of ICD patients. Science and Technology Studies, 26(2), 14–28.
- 7. Clynes, M. E., & Kline, N. S. (1960). Cyborgs and space. Astronautics, 26–27 and 74–75.
- 10. Foss, L., & Rothenberg, K. (1988). The second medical revolution: From biomedicine to infomedicine. Boston: New Science Library.
- 11. Foucault, M. (1966). The order of things: An archaeology of the human sciences. London: Routledge.
- 12. Friedewald, M. R., Lindner, R., & Wright, D. (Eds.) (2006). Policy options to counteract threats and vulnerabilities in ambient intelligence. SWAMI Deliverable D3: A report to the SWAMI consortium to the European Commission under contract 006507. http://swami.jrc.es.
- 15. Gilbert, F., & Ovadia, D. (2011). Deep brain stimulation in the media: Over-optimistic media portrayals call for a new strategy involving journalists and scientists in the ethical debate. Frontiers in Integrative Neuroscience, 5, 16.
- 17. Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative research. Chicago: Aldine Publishing Company.
- 19. Halperin, D., et al. (2008). Pacemakers and implantable cardiac defibrillators: Software radio attacks and zero-power defenses. In Proceedings of the 2008 IEEE Symposium on Security and Privacy (pp. 129–142). Los Alamitos, CA: IEEE Computer Society.
- 21. Harmon, S. H. E., & Chen, K.-H. (2012). Medical research data-sharing: The ‘public good’ and vulnerable groups. Medical Law Review.
- 22. Harmon, S., Haddow, G., & Gilman, L. Implantable smart medical devices: An empirical examination of characteristics, risks and regulation. Law, Innovation and Technology (forthcoming).
- 23. Hassan, K., Andrews, J. G., & Frey, W. (2009). In-vivo communication using blood vessels as the transport channel. In 2009 Conference record of the forty-third Asilomar conference on signals, systems and computers.
- 24. Haugeland, J. (1985). Artificial intelligence: The very idea. Cambridge, MA: MIT Press.
- 31. Leigh, I. W., et al. (2008). Correlates of psychosocial adjustment in deaf adolescents with and without cochlear implants: A preliminary investigation. Journal of Deaf Studies and Deaf Education.
- 33. Martin, T., Jovanov, E., & Raskovic, D. (2000). Issues in wearable computing for medical monitoring applications: A case study of a wearable ECG monitoring device. In The fourth international symposium on wearable computers.
- 38. Radcliffe, J. (2011). Hacking medical devices for fun and insulin: Breaking the human SCADA system. https://media.blackhat.com/bh-us-11/Radcliffe/BH_US_11_Radcliffe_Hacking_Medical_Devices_WP.pdf.
- 40. Seale, C. (2004). Researching society and culture. London: Sage.
- 45. Special issue: Ethical issues in enhancement technologies introduction. Kennedy Institute of Ethics Journal, 20(2), VII–VIII. https://muse.jhu.edu/. Accessed 25 Nov 2015.
Open Access: This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.