Ecotoxicology, Volume 17, Issue 5, pp 344–361

Nanoparticle analysis and characterization methodologies in environmental risk assessment of engineered nanoparticles

Authors

  • Martin Hassellöv, Department of Chemistry, University of Gothenburg
  • James W. Readman, Plymouth Marine Laboratory
  • James F. Ranville, Department of Chemistry & Geochemistry, Colorado School of Mines
  • Karen Tiede, Central Science Laboratory; Environment Department, University of York
Article

DOI: 10.1007/s10646-008-0225-x

Cite this article as:
Hassellöv, M., Readman, J.W., Ranville, J.F. et al. Ecotoxicology (2008) 17: 344. doi:10.1007/s10646-008-0225-x

Abstract

Environmental risk assessments of engineered nanoparticles require thorough characterization of nanoparticles and their aggregates. Furthermore, quantitative analytical methods are required to determine environmental concentrations and enable both effect and exposure assessments. Many methods still need optimization and development, especially for new types of nanoparticles in water, but extensive experience can be gained from the fields of environmental chemistry of natural nanomaterials and from fundamental colloid chemistry. This review briefly describes most methods that are being exploited in nanoecotoxicology for analysis and characterization of nanomaterials. Methodological aspects are discussed in relation to the fields of nanometrology, particle size analysis and analytical chemistry. Differences in both the type of size measures (length, radius, aspect ratio, etc.), and the type of average or distributions afforded by the specific measures are compared. The strengths of single particle methods, such as electron microscopy and atomic force microscopy, with respect to imaging, shape determinations and application to particle process studies are discussed, together with their limitations in terms of counting statistics and sample preparation. Methods based on the measurement of particle populations are discussed in terms of their quantitative analyses, but the necessity of knowing their limitations in size range and concentration range is also considered. The advantage of combining complementary methods is highlighted.

Keywords

Nanoparticles · Nanoaggregates · Nanometrology · Analytical chemistry · Particle size analysis

Introduction

Due to the extensive current, and foreseen future, investments in nanotechnology, nanoparticles used in consumer products, industrial applications and health care technology are likely to enter the environment (Aitken et al. 2006; Roco 2005). To ensure sustainable development of nanotechnology, there is a need for risk assessments of engineered nanoparticles (ENP) introduced from various applications (Colvin 2003; Maynard et al. 2006). Such risk assessments require proper tools and methodologies to carry out both effect and exposure assessments (EPA 2007; Maynard et al. 2006; SCENIHR 2005; Crane and Handy 2007). Conventionally, exposure assessment is recommended to include both a modeling and a measurement approach (Holt et al. 2000); both approaches require instrumentation and analytical methods. Prediction of environmental concentrations of ENP through modeling is based on emission scenarios (from production volumes and life cycle assessments) and partitioning parameters (fate and behavior). Presently, little is known about the fate and behavior parameters of ENP. Hence, development of suitable analytical methods is required to determine concentrations and nanoparticle characteristics in complex environmental matrices such as water, soil, sediment, sewage sludge and biological specimens. The approach of predicting environmental concentrations through modeling requires validation through measurement of actual environmental concentrations. For ENPs that are only recently being introduced into the environment, extremely sensitive methods are required. Although direct observations are not hampered by the underlying assumptions of exposure modeling, it is very important to ensure that direct observations are representative in time and space for the local or regional setting to which the observation will be allocated.

ENP differ from most conventional “dissolved” chemicals in terms of their heterogeneous distributions in size, shape, surface charge, composition, degree of dispersion, etc. Therefore, it is not only important to determine their concentrations, but also several other metrics.

In addition to exposure assessment requirements, it is essential that characterization of ENP dispersion states (i.e. aggregated or dispersed), and measurements of “steady-state” concentrations, are used in effect assessment test systems (e.g., toxicity testing). It has been found that ENP concentrations are often not sustained in dispersions throughout an experiment (Federici et al. 2007). Although this need was not recognized in the pioneering studies in nanoecotoxicology, it is now starting to be implemented in most effects experiments. In a recent review (Hansen et al. 2007), it was shown that although size determinations are becoming more common (17–96% of exposure and effects studies), other relevant characterization properties are rarely determined (e.g., surface area in only 6–33% of studies). An additional complication relates to stability. For example, Fig. 1 demonstrates that Buckminster fullerenes readily degrade and are highly reactive (Taylor et al. 1991). Indeed, it is the reactions of Buckminster fullerenes that render them of particular interest when investigating their potential applications in nanotechnology (Taylor 2006). This reactivity has substantial implications for the interpretation of environmental behaviour and ecotoxicological impact.
https://static-content.springer.com/image/art%3A10.1007%2Fs10646-008-0225-x/MediaObjects/10646_2008_225_Fig1_HTML.jpg
Fig. 1

C60 fullerene solutions (in toluene) stored under dark and light conditions (Photograph courtesy of P. Frickers and J.W. Readman). The photo illustrates the potential of using spectroscopic methods to study these photochemical changes in structure or surface chemistry

Assessing uptake and bioaccumulation in biological matrices is essential and will be as challenging as the analysis of complex environmental media. Furthermore, good laboratory practices and harmonized methods still need to be developed. Due to both the complexity of the behavior of nanomaterials in dispersions and the requirements for expertise in state-of-the-art methods in ecotoxicology testing and nanoparticle characterization, the necessity for interdisciplinary collaboration has been highlighted (Crane and Handy 2007; Handy et al. 2008). This paper focuses on mature and validated methods that are commercially available and/or fairly easy to set up. Consequently, highly specialized methods in the development phase, or methods requiring large-scale facilities such as synchrotron sources, are not discussed.

Nanometrology, analytical chemistry and particle size analysis

The physical properties of nanoparticles are referred to as metrics (Table 1), and the field of science that aims to standardize physical measurements at the nanometer scale is called nanometrology. Even though nanometrology is a young field regarding definitions and terminology, many concepts are borrowed and adapted from the fields of particle size analysis (Barth and Flippen 1995) and physical chemistry. In addition to the physical properties, nanoparticles can be described by their chemical composition, where the compound or species determined is called the analyte (Table 1).
Table 1

A list of physical properties (metrics) and chemical compositions (analytes), with the respective associated methods and instrumentsa

Physical properties/metrics | Instruments and methodsa
Diameter | EM, AFM, Flow-FFF, DLS
Volume | Sed-FFF
Area | EM, AFM
Mass | LC-ESMS
Surface charge | z-potential, electrophoretic mobility
Crystal structure | XRD, TEM-XRD (SAED)
Aspect ratio or other shape factor |

Chemical composition/analytes | Instruments and methodsa
Elemental composition | Bulk: ICP-MS, ICP-OES; single nanoparticle: TEM-EDX; particle population: FFF-ICP-MS
Fluorophores | Fluorescence spectroscopy
Fullerenes (“molecules”) | UV–vis, IR, NMR, MS, HPLC
Total organic carbon | High-temperature chemical oxidation

Other properties not falling within the above classes | Instruments and methodsa
Aggregation state | DLS, AFM, ESEM, etc.
Hydrophobicity | Liquid–liquid extraction chromatography
Dissolution rate | Dialysis, voltammetry or spectrometry
Surface chemistry, coating composition, number of proton-exchanging surface sites | Optical or X-ray spectroscopic methods, acid–base titrations

aFor abbreviations see text

The metric “particle diameter” is probably the most commonly used descriptor of particle size, but a single diameter value is only sufficient to describe a perfectly spherical particle. Non-spherical nanoparticles (or colloids) are, however, common in the environment, and it is actually common for nanoparticles to have very large aspect ratios (e.g., clay platelets, rods or fibrils). Many engineered nanoparticles share these features (e.g., carbon nanotubes, nanowires, nanoclays, nanorods). It has been shown that toxicity can be shape dependent (Pal et al. 2007), and nanoparticle reactivity can depend on both size and shape (Madden and Hochella 2005). There are several different diameter measures that correspond to an equivalent size of a specific type (Table 2). Different particle size analysis methods also yield different equivalent sizes (Table 2), which is important to consider when comparing size values obtained using different methods. Another important feature in method comparisons is that different techniques give different size averages, depending on whether they fundamentally rely on an instrument response to particle number, volume, mass or an optical property (e.g., light scattering) (Table 3). These averages are the same only for spherical, monodisperse particles (with an infinitely narrow size distribution); this, however, is usually not the case. Each method also has its limitations in applicable size and concentration ranges (Table 4). Therefore, it has to be taken into account that part of the nanoparticle (or nanoparticle-aggregate) size distribution may be “hidden” from the applied method. Some relevant terms and definitions from analytical chemistry, nanometrology and particle size analysis are given in Table 5.
Table 2

Different equivalent sizes measured by different methods

Equivalent spherical size measure | Applies to method | Comment
Hydrodynamic diameter | Flow-FFF, DLS | Calculated from the measured diffusion coefficient using the Stokes–Einstein equation
Equivalent spherical volume diameter | Sed-FFF (if density is known), LIBD, electrozone sensing |
Buoyant mass | Sed-FFF | Sed-FFF retention ∝ Δδ·V
Equivalent spherical mass diameter | MS | Assumes a certain structure
Projected area | Microscopy |
Equivalent molar mass | Ultrafiltration | Molecular weight cutoff (MWCO), defined from retention of proteins
Equivalent pore-size diameter | Particle filtration | Filter pore size often defined as the maximum size that penetrates the filter
Root mean square radius of gyration | SLS | Mean square distances from the center of mass of point masses within the particle
Aspect ratio | Microscopy, combination of light scattering methods or different FFF methods | The longest dimension divided by the shortest for symmetrical particles (e.g., rods and ellipsoids)

Table 3

Description of different types of size averages, with the equations defining them and the methods that derive such averages

Type of size average | Applies to method | Equation
Number average: size average over the number of particles within each size class | Microscopy, LIBD | $\bar{d}_n = \frac{\sum_i n_i d_i}{\sum_i n_i}$
Mass or volume average: size average over the volume of particles within each size class | FFF and SEC with most detection methods, CFF | $\bar{d}_v = \frac{\sum_i V_i d_i}{\sum_i V_i}$
Z-average: an intensity-weighted average attributed to certain methods | Dynamic light scattering | $\bar{d}_z = \frac{\sum_i n_i d_i^{6}}{\sum_i n_i d_i^{5}}$
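The practical consequence of these definitions is easy to see numerically. The sketch below computes the number, volume and intensity-weighted (z-) averages for one hypothetical polydisperse sample; the size classes and counts are invented for illustration. The number average is dominated by the many small particles, while the z-average is dominated by the few large, strongly scattering ones.

```python
# Hypothetical polydisperse sample: diameters (nm) and particle counts
# per size class.
diameters = [10.0, 50.0, 200.0]
counts = [1000, 100, 1]

def weighted_mean(d, w):
    """Weighted mean diameter: sum(w_i * d_i) / sum(w_i)."""
    return sum(wi * di for di, wi in zip(d, w)) / sum(w)

# Number average: each particle counts once (weight = n_i).
d_n = weighted_mean(diameters, counts)

# Volume average: weight by the volume in each class (proportional to n_i * d_i^3).
vol_weights = [n * d**3 for d, n in zip(diameters, counts)]
d_v = weighted_mean(diameters, vol_weights)

# Z-average (DLS-type, intensity weighted, d^6 scattering dependence):
# d_z = sum(n_i * d_i^6) / sum(n_i * d_i^5)
d_z = (sum(n * d**6 for d, n in zip(diameters, counts))
       / sum(n * d**5 for d, n in zip(diameters, counts)))

print(f"number average: {d_n:.1f} nm")
print(f"volume average: {d_v:.1f} nm")
print(f"z-average:      {d_z:.1f} nm")
```

For this sample the three averages differ by more than an order of magnitude, which is exactly why size values from, e.g., microscopy (number-based) and DLS (intensity-based) cannot be compared directly.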

Table 4

Specifications of methods for analysis and characterization of nanoparticles

Method | Approximate size range (nm) | Limit of detectiona | Single particle (sp) or particle population (pp) method | Level of sample perturbation
AFM | 0.5 to >1000 | ppb–ppm | sp | Medium
BET | 1 to >1000 | Dry powder | pp | High
Centrifugation | 10 to >1000 | Detection dependent | pp | Low
Dialysis | 0.5–100 | Detection dependent | pp | Low
DLS | 3 to >1000 | ppm | pp | Minimum
Electrophoresis | 3 to >1000 | ppm | pp | Minimum
EM-EELS/-EDX | Analysis spot size: ∼1 nm | ppm in single particle | sp | High
ESEM | 40 to >1000 | ppb–ppm | sp | Medium
ES-MS | <3 | ppb | pp | Medium
FFF | Flow FFF: 1–1000; Sed FFF: 50–1000 | Detection dependent; UV: ppm, fluorescence & ICP-MS: ppb | pp | Low
HDC | 5–1200 | Detection dependent | pp | Low
ICP-MS | Depends on fractionation | ppt–ppb | pp |
LIBD | 5 to >1000 | ppt | sp | Minimum
Microfiltration | 100 to >1000 | Detection dependent | pp | Low–medium
SEC | 0.5–10 | Detection dependent | pp | Medium
SEM | 10 to >1000 | ppb–ppm | sp | High
SLS | 50 to >1000 | | pp | Minimum
TEM/HR-TEM | 1 to >1000 | ppb–ppm | sp | High
TEM-SAED | Analysis spot size: 1 nm | | sp | High
Spectrometry | | ppb–ppm | pp | Minimum
Turbidimetry/nephelometry | 50 to >1000 | ppb–ppm | pp | Minimum
Ultrafiltration | 1–30 | Detection dependent | pp | Medium
WetSEM | 50 to >1000 | ppm | sp | Low
WetSTEM | | ppm | sp | Low
XRD | 0.5 to >1000 | Dry powder | pp | High

aFor comparison, mass concentration limits of detection are estimated for 100 nm particles

There are some special challenges for studies of ENPs in environmental samples. The first challenge is that for environmentally relevant concentrations (ng l−1–pg l−1), the detection limits for most methods are not sufficiently low. The second challenge is that in environmental samples there is a high background of natural and unintentionally produced nanoparticles (Banfield and Navrotsky 2001; Filella 2007; Hochella and Madden 2005; Lead and Wilkinson 2006; Waychunas et al. 2005; Wigginton et al. 2007).

A strategy for coping with these challenges may be to combine existing and new methods that afford both a screening capability and a highly selective detection. These techniques, however, can be developed and tested under less stringent experimental conditions (with higher concentrations) to investigate behaviors, fates and effects.
Table 5

Analytical chemistry, metrology and particle size analysis definitions

Term | Definition
Metric | The property that is being quantified
Analyte | The compound or species that is being quantified
Limit of detection | The lowest concentration that can be distinguished from the background, typically defined as 3 × the standard deviation of blank measurements
Precision | The statistical spread of values in a measurement series
Accuracy | The closeness of the averaged measurements to the true value
Measurement uncertainty | The accumulated uncertainty, including method, laboratory, between-day and between-laboratory biases
Method validation | Experimental proof that the method conforms to the specifications
Reference material | A material or substance that is sufficiently homogeneous for its property values to be used for calibration of instruments or assessment of methods
Certified reference material | A reference material accompanied by a certificate that specifies the traceability of the CRM and the associated uncertainty
Control sample | Within-laboratory quality control over time and between interlaboratory comparisons
Interlaboratory comparison | A blind test between participating laboratories to quantify deviation from a true or reference value
Number-based concentration | Determination of the number of particles per unit volume or mass
Mass-based concentration | Determination of the mass of particles per unit volume or mass
Number average based size | The size average of the numbers of particles within each size class: $\bar{d}_n = \frac{\sum_i n_i d_i}{\sum_i n_i}$
Volume average based size | The size average of the volume of particles within each size class: $\bar{d}_v = \frac{\sum_i V_i d_i}{\sum_i V_i}$
Z-average based size | A light-scattering-based, intensity-weighted average: $\bar{d}_z = \frac{\sum_i n_i d_i^{6}}{\sum_i n_i d_i^{5}}$
Polydispersity index | Weight average size/number average size
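The limit-of-detection definition in Table 5 (3 × the standard deviation of blank measurements) is straightforward to apply; the sketch below uses hypothetical blank readings purely for illustration.

```python
# Limit of detection as 3 * standard deviation of blank measurements.
# The blank readings (hypothetical, in ng/L) stand in for repeated
# measurements of an analyte-free sample.
import statistics

blanks = [0.12, 0.15, 0.10, 0.14, 0.11, 0.13, 0.16]
lod = 3 * statistics.stdev(blanks)  # sample standard deviation of the blanks
print(f"limit of detection: {lod:.3f} ng/L")
```

Any measured concentration below this value cannot be distinguished from the blank background with confidence.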

Dispersion, sampling and sample handling

Dispersion of nanoparticles for both exposure and effect assessments

Colloidal systems are dynamic non-equilibrium systems and are often sensitive to physical or chemical disturbances (Filella 2007). Sampling and laboratory procedures (e.g., pumping, mixing, etc.) that introduce shear forces are likely to perturb the dispersion state of ENPs, possibly leading either to further aggregation or to partial disruption of existing aggregates. The presence of natural organic matter and natural nanoparticles further complicates the situation. It is important to be aware of, and to characterize, the interaction of the ENP with the natural material. It is equally important to compensate for any background material of the same composition as the ENP. Background levels of identical composition can be present for TiO2 and SiO2, but also for carbon-based nanoparticles. Geological studies, using primarily transmission electron microscopy (TEM) to visualise the materials, have reported fullerenes in geological formations dating back 1.85 billion years (Becker et al. 1994), and CNTs together with fullerene-like structures in a Greenland ice core dated at approximately 10,000 years old (Murr et al. 2004). Given their reactivity, this is surprising (Taylor 2006), but it implies that these carbon-based nanoparticles have natural as well as engineered origins.

In the case of ecotoxicological exposures to carbon nanoparticles, the preparation and characterisation of aqueous fullerene suspensions is especially challenging owing to their low solubilities. Fortner et al. (2005) describe nanoaggregate formation of C60 fullerenes in water. Particle sizes within the aggregates are, however, dependent on formation parameters including pH, ionic strength and even the mixing rates. The properties of the aggregates are different from those of the pristine particles. Coupled with the fact that fullerenes oxidise (Fig. 1), ecotoxicological exposure techniques are rendered highly complex. For carbon nanotubes (CNTs), their extremely low solubility in water, the variable sizes of the particles, their small diameters and the complexity of the aggregates formed render dosing and particulate characterisation extremely difficult in aqueous exposure experiments. Nowack and Bucheli (2007) describe a standard procedure for solubilising CNTs by cutting the tubes with sonication and hydroxylating the ends and damaged regions using strong acid. Other treatments to disperse the materials are reported using surfactants (Jiang et al. 2003) and biopolymers, including humic and fulvic acids (Hyung et al. 2007). Treatments to facilitate dispersion must, however, be accounted for in the interpretation of toxic response and of how environmental relevance may be affected.

Sampling

Due to the unstable nature of colloidal nanoparticle dispersions it is preferable to use in situ analyses, but such methods are rarely available (Lead and Wilkinson 2006). The second choice is to apply methodologies that cause minimum perturbation from sampling to analysis. An example of such a technique is probing dispersions with electromagnetic radiation (e.g., light, X-rays or neutrons), where the scattering/absorption patterns can be related to physical properties of the particles, as described below.

Sample contamination and loss

Sampling of nanoparticles should generally be feasible with most standard sampling protocols, but the handling procedures differ from many other chemicals. Samples of colloids from surface waters are often collected in bottles that have been selected for minimum adsorption and contamination, e.g., plastics, especially fluoroplastics, for inorganic colloids or metal analysis and glass for analysis of organic trace constituents (Hall 1998). Since engineered nanoparticles may consist of e.g., an inorganic core with an organic coating or surfactants, conventional material selections may have to be revised. Further, the nanoparticle surface charge and possible charges on the bottle walls of both plastic and glass at the specific pH should be taken into account. Consequently, for engineered nanoparticles, adsorption to sample bottles needs to be investigated for both inorganic and carbon-based nanoparticles on a case-by-case basis until new experience-based knowledge has been accrued. Similar concerns apply to all other materials to which the sample is being exposed (e.g., tubing, filter materials, pipettes, amongst others).

Extracting inorganic nanoparticles from soil and sediment

Examining ENP in soils and sediments has the same limitations as for water samples, with the additional complication of much higher quantities of natural solids, many of which are in the same size range as the ENP. Dispersion methods for releasing natural nanomaterials from the solid matrix, such as sonication and chemical dispersants (hexametaphosphate, detergents, etc.), will likely release the ENP to the solution phase, but the physicochemical state of the ENP is likely to change (e.g., break-up of flocs). These protocols are reported in the soil literature (Gee and Bauder 1986). The separation of nanoparticles from soil suspensions or sediment slurries is difficult and prone to artifacts. As a general suggestion, centrifugation is less perturbing than filtration (Gimbert et al. 2005, 2006), but the differential settling during centrifugation can also induce aggregation. This is further discussed in the “Prefractionation” section below. The challenge then remains to discriminate between natural and engineered nanoparticles.

Extracting carbon-based nanoparticles from water, soil and sediment

Pristine fullerenes are comparatively soluble in organic solvents such as toluene and can be extracted from media (including water) into solvent (Fortner et al. 2005). In the case of CNTs (both single and multi-walled), Nowack and Bucheli (2007) summarise that no method currently exists for their quantification in natural media. Indeed, CNTs have low solubility, even in organic solvents.

Prefractionation

Environmental samples often contain complex mixtures of particles of different size classes, compositions and shapes, of biotic and/or abiotic origin. In order to study nanoparticles, it is often necessary to first reduce the complexity using a coarse prefractionation. The prefractionation can be based on settling, centrifugation or filtration. Settling or centrifugation is only effective in removing particles that have a settling velocity that dominates over their Brownian motion. The settling velocity depends on the particle volume, shape, and density difference with respect to water. Therefore, settling or centrifugation is more efficient in removing dense mineral particles than algae and other organic particles. Centrifugation is a minimum-perturbation prefractionation technique, but settling particles can scavenge other smaller particles due to differential settling velocities.
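The density argument above can be made quantitative with Stokes' law for the terminal settling velocity of a small sphere. In the sketch below the particle densities are hypothetical but representative of a quartz-like mineral and a near-neutrally-buoyant organic particle; the contrast shows why settling removes minerals far more efficiently.

```python
# Stokes' law settling velocity for a small sphere in water:
# v = g * (rho_p - rho_f) * d^2 / (18 * mu)

def stokes_settling_velocity(d_m, rho_p, rho_f=1000.0, mu=1.0e-3, g=9.81):
    """Terminal settling velocity (m/s) for diameter d_m (m), particle
    density rho_p (kg/m^3), fluid density rho_f and viscosity mu (Pa s)."""
    return g * (rho_p - rho_f) * d_m**2 / (18.0 * mu)

# Hypothetical 1 um particles: a dense mineral vs. an organic particle.
v_mineral = stokes_settling_velocity(1e-6, rho_p=2650.0)  # quartz-like density
v_organic = stokes_settling_velocity(1e-6, rho_p=1050.0)  # near-neutral buoyancy
print(f"mineral: {v_mineral:.2e} m/s, organic: {v_organic:.2e} m/s")
```

For equally sized particles the velocity scales linearly with the density difference, so the mineral particle here settles roughly 30 times faster than the organic one, while sub-100 nm particles of either kind barely settle at all against Brownian motion.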

Microfiltration, with pore sizes generally greater than 0.1 μm, is the most common prefractionation technique, due to its simplicity of operation. However, common “dead-end” filtration is prone to many artifacts, e.g., nanoparticle deposition, membrane concentration polarization, and filter cake formation (Buffle et al. 1992; Morrison and Benoit 2001).

Nanoparticles can be deposited on the membrane surface due to collision or electrostatic attraction. Particles smaller than the pore size can be transported through the membrane more slowly than the liquid, due to electrostatic repulsion within the pores. This causes concentration polarization (a build-up of higher particle concentration in the membrane's diffusive boundary layer), which leads to higher collision rates between particles and consequently aggregation. Aggregates or particles attached to the membrane provide more efficient trapping of nanoparticles and their aggregates. This leads to formation of a filter cake, and the effective pore size decreases severely; in other words, the filter clogs.

These problems are especially severe for non-stabilized nanoparticles, e.g., those that lack hydrophilic surfaces. Therefore, filtration of engineered nanoparticle suspensions should be critically evaluated in terms of the scavenging of nanoparticles and, as a consequence, changing the size distribution.

Fractionation by ultrafiltration, nanofiltration and dialysis

Fractionation by membranes can either be done by applying a pressure to overcome the pressure drop across a membrane that sieves molecules or particles according to their size, as in ultrafiltration, or by letting solutes equilibrate across the membrane, as in dialysis. The microfiltration artifacts mentioned above become greater as the pore size of the filter decreases (ultrafiltration and nanofiltration). This is especially critical where membranes are used as macromolecular sieves. In order to reduce the diffusive boundary layer over the membrane, and thereby minimize concentration polarization, cross-flow (or tangential) filtration (CFF) has been developed. In CFF the sample is recirculated (or stirred) in a reservoir on top of the membrane. A fraction of the sample, with components smaller than the pore size, passes through the membrane (to yield the permeate) in each cycle. By measuring the concentration of analyte in the initial sample, the retentate (the fraction not passing through the membrane) and the permeate, it is possible to calculate the concentrations of analyte in the fractions smaller and larger than the membrane pore size. The performance of cross-flow ultrafiltration has been extensively evaluated for natural colloids; the membrane type, membrane manufacturer and operating conditions all have large influences on the fractionation results and recoveries obtained (Guo et al. 2000; Larsson et al. 2002; Liu and Lead 2006). Therefore, cross-flow ultrafiltration should be appropriately tested and evaluated prior to application to ENPs. Ultrafiltration is a preparative size fractionation method that can be scaled to process large sample volumes and produce large quantities of isolated nanomaterials. Although it is limited to two fractions (above and below the membrane pore size), multi-stage filtration can allow a crude size fractionation; however, this is extremely labor and time intensive.
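The mass balance over sample, retentate and permeate described above can be sketched in a few lines. The function name and the run parameters below are hypothetical, and ideal behavior (no losses to the membrane) is assumed; a recovery far from 1 would indicate exactly the membrane adsorption artifacts discussed in the text.

```python
# Cross-flow filtration (CFF) mass balance over one run.

def cff_mass_balance(c0, v0, cr, vr, cp, vp):
    """Given concentrations (c) and volumes (v) of the initial sample (0),
    retentate (r) and permeate (p), return the mass fractions below and
    above the membrane pore size, plus the overall mass recovery."""
    m0 = c0 * v0        # analyte mass in the initial sample
    m_large = cr * vr   # mass retained (fraction above pore size)
    m_small = cp * vp   # mass in the permeate (fraction below pore size)
    recovery = (m_large + m_small) / m0
    return m_small / m0, m_large / m0, recovery

# Hypothetical run: 1 L sample at 100 ug/L concentrated to 0.1 L retentate.
f_small, f_large, rec = cff_mass_balance(
    c0=100.0, v0=1.0, cr=750.0, vr=0.1, cp=25.0, vp=0.9)
print(f"below pore size: {f_small:.1%}, above: {f_large:.1%}, recovery: {rec:.1%}")
```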
When the membrane pore size is below ∼1 nm, the method is typically defined as nanofiltration. Nanofiltration is usually applied to the separation of molecules from salts and could potentially be applied to separate nanoparticles from their dissolved counterparts.

Dialysis is an ultra- or nanofiltration method that operates on diffusion of solutes across a membrane that arises from concentration gradients and osmotic pressure instead of pressure driven filtration (as is the case in CFF). Dialysis is a very mild fractionation method and it can be used to separate truly dissolved components (ions and small molecules) from their nanoparticle counterparts. Dialysis has been used to study nanoparticle-solute sorption behavior as well as nanoparticle dissolution, where the aqueous counterparts will diffuse across the dialysis membrane (Franklin et al. 2007). However, dialysis usually utilizes deionized or distilled water as an acceptor solution. This may promote dissolution or ionic strength changes which will lead to changes in dispersion state.

Field-flow fractionation, size exclusion and hydrodynamic chromatography

Field-Flow Fractionation (FFF) is a mild chromatography-like size-fractionating method that differs from chromatography in that it does not utilize a stationary phase. The most common FFF sub-technique is Flow FFF, which is discussed here. Flow FFF separates nanoparticles according to their particle size by virtue of their diffusion coefficients in a very thin open channel (Giddings 1993; Hassellöv et al. 2007; Schimpf et al. 2000). The separation principle relies on the combination of an applied field and longitudinal carrier flow. The field acts perpendicular to the length of the separation channel and causes the nanoparticles to move towards the accumulation wall. Nanoparticles form a cloud whose thickness is given by the particles’ ability to oppose (generally through diffusion) the force of the field. Smaller particles will not be affected to the same extent as larger particles, and hence the smaller particles elevate higher in the channel. Perpendicular to the field, along the channel, the laminar separation flow is acting on the nanoparticles. The parabolic shape of the laminar flow velocity in the channel implies that particles traveling nearer to the middle of the channel move faster than particles traveling closer to the channel walls. Consequently, the smaller particles, having higher extending clouds, on average, travel faster than the larger particles, resulting in fractionation of the sample that provides a continuous size distribution. To monitor the size distributions, the FFF needs to be coupled to a detector that responds to the nanoparticle number or mass concentration. Examples include: UV absorbance, light scattering (von der Kammer et al. 2005b), or elemental detectors such as ICP-MS (Hassellöv et al. 1999; Ranville et al. 1999; Jackson et al. 2005). The latter detector is very useful for characterizing metal-containing nanoparticles, an example being given in Fig. 2. 
Depending on the type of detector used, different kinds of size-dependent information about the sample are obtained. One great advantage of FFF, compared to other fractionation methods, is that the retention time is directly related to nanoparticle physical properties. Retention in FFF is expressed as the retention ratio (R) given by
$$ R = \frac{t^{0}}{t_{r}} $$
(1)
where t0 is the void time and tr is the sample retention time. For highly retained components, R can be approximated by
$$ R \approx 6\lambda $$
(2)
while R can be estimated as follows for intermediate retention
$$ R = 6\lambda \left[ \coth\left( \frac{1}{2\lambda} \right) - 2\lambda \right] $$
(3)
https://static-content.springer.com/image/art%3A10.1007%2Fs10646-008-0225-x/MediaObjects/10646_2008_225_Fig2_HTML.gif
Fig. 2

Representative FFF fractogram of a CdSe quantum dot using on-line fluorescence and ICP-MS detection

The fundamental retention parameter (λ) is defined as the mean distance of the component from the wall (l) divided by the channel thickness (w).

$$ \lambda = \frac{l}{w} = \frac{D}{Uw} $$
(4)

Channel thickness is calculated from experimentally determined channel volumes, since the actual channel thickness may differ from the manufacturer's specifications. Estimates of λ from experimental determinations of R allow calculation of the diffusion coefficient (D). It is important to note that the fundamental measurement made by Flow FFF is the diffusion coefficient. In Flow FFF, diffusion coefficients can be used to determine the hydrodynamic diameter; in Sedimentation FFF, buoyant mass or equivalent spherical diameter can be determined (Giddings 1993).
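The data reduction chain above can be sketched end to end: compute R from the void and retention times (Eq. 1), invert Eq. 3 numerically to obtain λ, derive D from Eq. 4, and convert D to a hydrodynamic diameter with the Stokes–Einstein equation. All run parameters below (channel thickness, drift velocity, times) are invented for illustration.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant (J/K)

def lam_from_R(R):
    """Invert R = 6*lam*(coth(1/(2*lam)) - 2*lam) (Eq. 3) by bisection;
    R is monotonically increasing in lam over this bracket."""
    lo, hi = 1e-6, 0.5
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        R_mid = 6 * mid * (1.0 / math.tanh(1.0 / (2 * mid)) - 2 * mid)
        if R_mid < R:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def hydrodynamic_diameter(t0, tr, U, w, T=298.15, mu=8.9e-4):
    """Hydrodynamic diameter (m) from Flow FFF retention: t0/tr -> R,
    R -> lam (Eq. 3), D = lam*U*w (Eq. 4), then Stokes-Einstein."""
    R = t0 / tr                            # Eq. 1
    lam = lam_from_R(R)
    D = lam * U * w                        # Eq. 4 rearranged
    return K_B * T / (3 * math.pi * mu * D)  # Stokes-Einstein

# Hypothetical channel: thickness w = 250 um, field-induced drift
# U = 2e-5 m/s, void time 60 s, retention time 600 s.
d_h = hydrodynamic_diameter(t0=60.0, tr=600.0, U=2e-5, w=250e-6)
print(f"hydrodynamic diameter: {d_h * 1e9:.1f} nm")
```

Bisection is used for the inversion because Eq. 3 has no closed-form inverse; for highly retained components, λ ≈ R/6 (Eq. 2) is an adequate shortcut.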

The most critical factors in Flow FFF analysis are the choice of membrane and the optimization of the carrier composition. The particles should travel through the fractionation channel in close vicinity to the membrane without aggregating, adsorbing to the membrane or experiencing inter-particle repulsion. For complex natural samples this is generally accomplished by controlling electrostatic repulsion and steric stabilization through a combination of suitable ionic strength (typically 0–20 mM monovalent salt) and a surfactant (e.g., 0.05% sodium dodecyl sulphate) (Hassellöv et al. 2007). FFF has been successfully applied to a wide range of synthetic nanoparticles (e.g., SiO2, TiO2, ZrO2, Au, Ag, carbon black, pigments, Teflon, carbon nanotubes, soot particles) (Schimpf et al. 2000).

Another size fractionation method is size exclusion chromatography (SEC), in which a particle or macromolecule mixture is passed through a column packed with a porous material whose pore sizes span the range of the particles to be fractionated (Barth and Boyes 1992). The particles are separated according to their hydrodynamic volume (size and shape) by their ability to enter the porous structure of the packing material: larger particles enter the pores to a lesser extent than smaller ones. Each SEC column has a certain operating size (or molar mass) window; particles above the window elute first, unfractionated, in a single peak, then come the fractionated particles, and finally the "salt peak" of ions and molecules that have passed through the complete pore volume. Size exclusion chromatography has been applied to carbon nanotubes and fullerenes, as described in a later section, as well as to natural organic and inorganic nanomaterials (Perminova et al. 2003; Vogl and Heumann 1997; Jackson et al. 2005).

Hydrodynamic chromatography (HDC) is another size fractionation method, carried out in narrow open capillaries or in wider columns packed with non-porous material that essentially forms capillary routes. Because of its finite size, a particle's center of mass cannot approach the wall arbitrarily closely, and a smaller particle can approach the wall more closely than a large one. Therefore the elution order is the same as in SEC and in the steric mode of FFF. The separation efficiency of HDC is rather poor, but the operating size range is wide. HDC has been successfully applied for the fractionation of nanoparticles (Williams et al. 2002; Tiede unpublished results).

Chromatographic analyses of carbon nanoparticles

Many conventional techniques have been used to analyse fullerene solutions, including UV–vis spectrophotometry, infrared spectroscopy, nuclear magnetic resonance and mass spectrometry, frequently coupled to high performance liquid chromatography (HPLC) (Andrievsky et al. 2002; Fortner et al. 2005; Isaacson et al. 2007; Nowack and Bucheli 2007; Treubig and Brown 2002). For HPLC, octadecyl silane (ODS) stationary phases are most commonly selected, with elution using solvents such as toluene or toluene:acetonitrile mixtures (Treubig and Brown 2002). When UV–vis absorbance detection is used, 325 nm is the wavelength typically selected. Alternatively, gel permeation chromatography can be used, for example an Agilent PL gel 10 μm 50 Å column with toluene elution (Readman and Frickers, unpublished data). Size exclusion chromatography has also been applied to characterise CNTs (Duesberg et al. 1998).

Light scattering techniques

Light scattering is a very commonly used approach to determine particle size (Schurtenberger and Newman 1993). The electromagnetic radiation of the incident photons induces an oscillating dipole in the particle electron cloud, and as the dipole changes, electromagnetic radiation is scattered in all directions. The source can be laser light, X-rays or neutrons, each of which enables probing of different size ranges and particle compositions. Discussion here is mainly limited to methods utilizing laser light, since these are the most readily available for particle characterization in ecotoxicology.

Dynamic light scattering

In dynamic light scattering (DLS), also called photon correlation spectroscopy or quasielastic light scattering, fluctuations in the scattered light that depend on particle diffusion are utilized. The fluctuations originate from the Brownian motion of the particles and from the fact that neighboring particles can cause constructive or destructive interference of the scattered light intensity in a certain direction. In the DLS instrument the intensity is measured over very short time periods (δt), making it possible to compare (correlate) the intensity at time t0 with that at time t0 + δt (on the order of microseconds to milliseconds). Smaller particles (with faster diffusion) lose the correlation (the memory of their previous position) more rapidly than larger particles. The scattering intensity is plotted as an autocorrelation function:
$$ g(\tau) = \left| G(\tau) - \langle I \rangle^{2}/\gamma \right|^{1/2} = A\,e^{-2\Gamma\tau} $$
(5)
where G(τ) is the field autocorrelation function, \( {\left\langle {\text{I}} \right\rangle }^{2} \) is the base line and γ is the coherence factor, expressing the efficiency of the photon collection. A is an instrument-specific constant, Γ is the decay rate and τ the delay time. Γ can be converted to the diffusion coefficient, D, using the relation:
$$ D = \Gamma/q^{2} $$
(6)
where q is the wave vector, which can be described by the following relation:
$$ q = \frac{4\pi n \sin\left( \theta/2 \right)}{\lambda} $$
(7)
where n is the refractive index of the solvent, θ is the scattering angle and λ is the wavelength of the incident light. If the diffusion coefficient is known, the hydrodynamic radius, Rh, can be calculated from the Stokes–Einstein equation:
$$ R_{h} = \frac{kT}{6\pi\eta D} $$
(8)
where k is Boltzmann's constant, T is the absolute temperature and η is the dynamic viscosity of the solvent.
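Assuming the single-exponential decay of Eq. 5, a fitted decay rate Γ can be converted to a hydrodynamic radius with Eqs. 6–8. A minimal sketch with illustrative, assumed parameters (He–Ne laser, 90° detection, water at 25 °C, invented decay rate):

```python
import math

# Assumed instrument and sample parameters (illustrative only)
N_SOLVENT = 1.33          # refractive index of water
WAVELENGTH = 633e-9       # He-Ne laser wavelength (m)
THETA = math.radians(90)  # scattering angle
K_B = 1.380649e-23        # Boltzmann constant (J/K)
T = 298.15                # temperature (K)
ETA = 8.9e-4              # dynamic viscosity of water (Pa s)

def wave_vector(n=N_SOLVENT, lam=WAVELENGTH, theta=THETA):
    """Eq. 7: q = 4*pi*n*sin(theta/2)/lambda."""
    return 4.0 * math.pi * n * math.sin(theta / 2.0) / lam

def hydrodynamic_radius(gamma_decay):
    """Eqs. 6 and 8: D = Gamma/q^2, then R_h = k*T/(6*pi*eta*D)."""
    D = gamma_decay / wave_vector() ** 2
    return K_B * T / (6.0 * math.pi * ETA * D)

gamma = 5.0e3   # hypothetical decay rate from the autocorrelation fit (1/s)
r_h = hydrodynamic_radius(gamma)
print(f"D = {gamma / wave_vector() ** 2:.3e} m^2/s, R_h = {r_h * 1e9:.1f} nm")
```

With these assumptions the invented decay rate corresponds to a particle of a few tens of nanometres, which illustrates the order of magnitude involved.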

The advantages of DLS are its rapid and simple operation, readily available equipment and minimal perturbation of the sample (Ledin et al. 1994). The limitations lie in the interpretation of the data, especially for polydisperse systems, which requires critical review (Filella et al. 1997). DLS gives an intensity-weighted correlation function that can be converted to an intensity-weighted (z-average) diffusion coefficient.

For d < λ/20, the scattering intensity I ∼ d⁶, according to the Rayleigh approximation, while for λ/20 < d < ∼λ, I ∼ d² (Debye approximation). The strong particle size dependence of the scattering intensity will bias the measured size, since a small amount of large particles will have such a large influence that smaller particles are neglected. Consider a sample with two particle sizes, d = 3 and 30 nm, at equal particle number concentrations. The volume concentration of the 30 nm particles will be 1,000 times larger, from the geometry of a sphere, but according to the Rayleigh approximation the scattering intensity will be 10⁶ times stronger for a 30 nm particle than for a 3 nm particle. For even larger particles, the response difference will be enormous. Consequently, even the smallest fraction of dust or other micrometer-sized particles will ruin the signal from the nanoparticles.
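The intensity weighting for the two-size example above can be verified in a few lines (equal number concentrations, Rayleigh d⁶ regime assumed for both sizes):

```python
# Rayleigh-regime (I ~ d^6) intensity weighting for two populations
# at equal number concentration.
sizes_nm = [3.0, 30.0]
intensities = [d ** 6 for d in sizes_nm]   # relative scattered intensity
total = sum(intensities)
for d, i in zip(sizes_nm, intensities):
    print(f"{d:4.0f} nm particles: {100.0 * i / total:.4f}% of the intensity")
```

The 30 nm particles carry essentially all of the scattered intensity (a factor of 10⁶ over the 3 nm particles), so the small particles are invisible to the measurement.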

For multimodal size distributions (multi-component mixtures), the conversion of the autocorrelation function to diffusion coefficients is an ill-posed mathematical problem, where small variations in the input can give large deviations in the output. For this reason, but more importantly because the signal from larger particles dominates over that from smaller ones, a general rule is that DLS is not suitable for samples with a polydispersity index above ∼1.5–1.7.

Since DLS measures diffusion coefficients, and all size calculations assume that the Stokes–Einstein relation (Eq. 8) holds, it is essential to validate that the measured diffusion coefficient is the undisturbed self-diffusion coefficient. For charged nanoparticles, electrostatic forces between particles affect the diffusive behavior. This effect is concentration dependent, and the upper boundary occurs when the nanoparticles become entrapped by forces from their close neighbors, the point of so-called gel formation. By diluting the sample as far as possible while remaining above the detection limit, and extrapolating the measured diffusion coefficient to infinite dilution, the unperturbed diffusion coefficient can be estimated. This value is the one that can most reliably be used to calculate size with the Stokes–Einstein equation. However, dilution of a sample may change its diffusion behavior and aggregation state. If the goal is not primary particle size but rather to characterize the dispersion state of a sample, it is more relevant not to dilute the sample and to report diffusion coefficients only, rather than sizes.
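The extrapolation to infinite dilution described above can be sketched as a linear fit of measured diffusion coefficients against concentration; the dilution series below is invented for illustration:

```python
# Hypothetical dilution series: apparent D varies with concentration due to
# inter-particle interactions; extrapolate linearly to c = 0.
concentrations = [0.5, 1.0, 2.0, 4.0]                    # mg/L (illustrative)
d_measured = [4.05e-11, 4.10e-11, 4.20e-11, 4.40e-11]    # m^2/s (illustrative)

# Least-squares line D(c) = D0 + slope*c, without external libraries
n = len(concentrations)
mean_c = sum(concentrations) / n
mean_d = sum(d_measured) / n
slope = sum((c - mean_c) * (d - mean_d)
            for c, d in zip(concentrations, d_measured)) / \
        sum((c - mean_c) ** 2 for c in concentrations)
d0 = mean_d - slope * mean_c   # estimate of the unperturbed D at c = 0
print(f"D0 (infinite dilution) = {d0:.3e} m^2/s")
```

Only the extrapolated D0 should then be inserted into the Stokes–Einstein equation; the slope itself indicates how strongly the interactions perturb the measurement.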

It should also be noted that the derived data from DLS are intensity based distributions or averages, and mathematical conversions to volume or number distributions should only be provided with good knowledge of the particle shapes, polydispersity and underlying assumptions (Finsy 1994). Although dynamic light scattering does not provide full characterization of nanoparticle dispersion, it is very valuable to, for example, monitor aggregation behavior.

Static light scattering

Static light scattering (SLS), also called multi angle (laser) light scattering (MALS or MALLS), provides measurement of physical properties derived from the angular dependency of light scattered by a particle, which arises because a particle of a certain size generates destructive and constructive interference at certain angles. Time-averaged scattering intensities are measured at several angles to derive size parameters such as the root mean square radius of gyration (Rg), which is the root mean square distance of the point masses in a particle from its center of gravity. Consequently SLS relates to particle structure and morphology and can therefore be used in combination with DLS to give information on particle shape factors. There are several important assumptions in SLS theory for the different analytical solutions. The most used is the Rayleigh–Gans–Debye approximation (Schurtenberger and Newman 1993), which requires that the refractive index difference between particle and solvent is negligible, that the particle concentration approaches zero, and that no light absorption by the particles occurs.

For both dynamic and static light scattering, polydisperse samples impose limitations on the methods. Therefore, it has been shown to be beneficial to couple light scattering detectors online to a fractionation method such as FFF or SEC (von der Kammer et al. 2005b; Wyatt 1998). With this combination, independent size distributions can be derived from the two methods, and comparison of the two results allows distributions of particle shape factors to be estimated (von der Kammer 2005).

Nephelometry

Turbidity, or nephelometry, is a particle concentration measurement that utilizes scattering of light at 90°, or sometimes 180°, with respect to the light source, which can be a laser or monochromatic light. The equipment is very simple and can be portable or even deployed in situ, but the relationship between the measured signal and particle concentration is not trivial. The light scattering intensity is, as mentioned above, strongly dependent on particle size, and also on other parameters such as the refractive index difference between the particles and the suspension medium. Therefore, for quantitative analysis, turbidity measurements should only be used for well-defined particles of fairly narrow size distributions, complemented by calibration with other techniques (e.g., gravimetry). Turbidity is fairly insensitive to dispersed nanoparticles and is better suited to monitoring aggregation.

Nephelometry has also been used as a chromatographic particle concentration detector (von der Kammer et al. 2005a).

Laser induced breakdown detection

Laser induced breakdown detection (LIBD) is based on the fact that when a solid nanoparticle passes through the focal volume of a focused, pulsed laser, the power density required to induce dielectric breakdown is lower than for pure water (Kim and Walther 2007). If the laser energy is correctly tuned, plasma formation will only occur when a nanoparticle passes through the focal volume of the optical cell. The plasma formation, or breakdown, is detected either with a piezo-electric crystal attached to the cuvette or with a CCD camera synchronized with the laser pulse. The parameter measured is the breakdown probability (BP). Since the BP at a given laser energy depends on both particle concentration and size, both have to be elucidated. The most common mode is to tune the laser pulse energy and measure the BP of the sample, and to do the same for a set of calibration standards of known size at different concentrations. The BP for larger nanoparticles rises above zero at lower laser energies than for smaller nanoparticles, and the slopes of the BP versus laser energy curves depend on the concentrations, which are likewise obtained from the calibration standards.

The main advantage of LIBD is that it is extremely sensitive, even to small nanoparticles, with detection limits in the ppt (ng dm⁻³) range. In fact, LIBD is so sensitive that most samples have to be diluted in order not to saturate the breakdown probabilities.

The main disadvantages are that LIBD cannot discriminate between different types of nanoparticles and even more seriously, that different nanoparticle compositions have different breakdown probabilities (instrument responses). Therefore it is not possible to use one set of calibration standards for different types of nanoparticles. LIBD is a specialized technique that is not yet commercially available.

Spectroscopic analysis and characterization

Certain classes of nanoparticles demonstrate strong fluorescence, and this property is utilized in many fields such as medical imaging, immunoassays and photonics, amongst others (Bailey et al. 2004). Quantum dots (QDs) are composed of semi-conductor materials, for example CdSe, CdS and CdTe, and are highly fluorescent. These particles can be characterized by either their absorption or their fluorescence emission spectra. The absorption spectrum is broad at low wavelengths but displays a sharp peak, called the first exciton peak, at the upper wavelength end of the absorption spectrum. This peak is generally on the order of 20–50 nm lower in wavelength than the emission peak. The position of this absorption peak can be correlated to the particle size and is commonly used to monitor size in QD synthesis (Yu et al. 2003). The emission peak tends to be fairly narrow, on the order of 50 nm, with the wavelength being highly sensitive to nanoparticle size. Measurement of fluorescence spectra can thus also be used to determine particle size. In natural systems, natural fluorophores contained in humic substances and biological cells may interfere with these determinations. Non-fluorescent nanoparticles such as silica can be labeled with dyes to impart fluorescence. In some cases the fluorescence of the dye can be enhanced by the presence of a second dye that contributes its excitation energy through a radiationless transfer.

Quantitation of particle concentrations can be performed using absorption or fluorescence if the optical constants of the particles are known. For example, extinction coefficients for the first exciton peak of some QDs were determined by Yu et al. (2003). It remains to be determined how significantly background absorption from naturally occurring materials in water will limit the usefulness of UV–vis absorption for nanoparticle quantitation in aquatic systems.
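As a sketch of such absorbance-based quantitation, the Beer–Lambert law can be applied to the first exciton peak. The extinction coefficient below is an assumed placeholder for illustration, not a value from Yu et al. (2003):

```python
# Beer-Lambert quantitation sketch: A = epsilon * l * c.
# The extinction coefficient is a placeholder, not a literature value.
EPSILON = 1.0e5   # molar extinction coefficient (L mol^-1 cm^-1), assumed
PATH = 1.0        # cuvette path length (cm)

def concentration_from_absorbance(absorbance, epsilon=EPSILON, path=PATH):
    """Beer-Lambert: c = A / (epsilon * l), in mol/L."""
    return absorbance / (epsilon * path)

c = concentration_from_absorbance(0.25)
print(f"concentration = {c:.2e} mol/L")
```

In an environmental sample the measured absorbance would also contain the background contribution from natural organic matter, which is exactly the interference discussed above.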

Both UV–vis absorption and fluorescence can be used as online detectors for chromatography and FFF systems. The extremely bright fluorescence of some nanoparticles should provide low detection limits for these techniques. Figure 2 shows an example of the use of online fluorescence detection with FFF for a CdSe quantum dot.

Fluorescence microscopy gives spatial information and has been very useful in looking at the distribution of nanoparticles in cells and organisms. For example, uptake of QDs into the guts of filter feeding organisms is clearly observable using fluorescence microscopy.

For naturally fluorescent materials or labeled macromolecules, fluorescence correlation spectroscopy, performed within the focal point of a laser confocal microscope, has been successfully applied to determine diffusion coefficients (Lead et al. 2000b). The principle is similar to dynamic light scattering (also called photon correlation spectroscopy), but the sensitivity is much better for small (fluorescent) particles. The method should be very suitable for studies of QDs in environmental media.

In describing the UV–vis absorption spectra of metal NPs, the term surface plasmon is used, which describes the oscillating electron clouds present at the metal–solution interface. Particle size strongly affects the absorption spectra through quantum confinement effects that become important at the nanometer scale: the smaller the particle, the lower the wavelength of light absorbed. Aggregation of NPs results in broadening and red shifting of the surface plasmon band, which has been used to study the effect of electrolytes on metal NP stability (Aryal et al. 2006).

Particle shape characterization of metal NPs is also possible from examination of surface plasmons. While spherical gold and silver NPs have strong surface plasmon bands at about 520 and 400 nm respectively, nanorods of these metals show two bands, a red-shifted long-axis band and a blue-shifted short-axis band. The wavelength of the long-axis band is particularly sensitive to particle aspect ratio. It has also been noted that Au nanorods have 10⁶ times stronger fluorescence than spherical Au NPs (Link and El-Sayed 1999). Consequently, surface plasmon effects can be used to study particle-particle interactions, since the apparent aspect ratio changes when two single particles come close together.

Electron microscopy and atomic force microscopy

There are several powerful microscopy techniques that can provide images of nanoparticle systems as well as additional information on elemental composition, structure and even charge or force measurements. Microscopy methods are all single particle methods; that is, the data do not arise from an ensemble of particles, as is the case with light scattering. This enables information to be collected on each particle free from interference from other particles or background solutes, and gives good information on particle processes that sometimes cannot be obtained with bulk analysis (Mavrocordatos et al. 2007). However, it also means that even though a quantitative measurement with sometimes fairly good accuracy can be achieved on a single particle, it is only by counting and measuring enough particles (of a certain type or in a certain size range) that sufficient counting statistics for the complete sample are obtained; this is needed in order to deliver a quantitative analysis or characterization of the sample. An average size measured by microscopy on a certain number of particles is a number average, and in order to measure an accurate size distribution of nanoparticles it is necessary to count and measure thousands of particles to obtain reliable counting statistics for the very few larger nanoparticles in the distribution. These large particles (or aggregates), even if very few, can contribute substantially to the volume- or mass-based distributions. In nanotechnology or materials science this is not a problem, since the particles to be measured are of the same type and of similar size, but when particles are dispersed in water and mixed with natural organic matter and natural nanoparticles the situation is quite different.
Therefore we see a great need for automation in electron microscopy and for the development of "smart" image analysis software that enables characterization of the millions of particles needed in each sample (Mavrocordatos et al. 2007). With this said, microscopy methods are very powerful for imaging and process understanding, but they should be complemented with a particle population method that gives quantitative information on the sample.
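The counting-statistics argument can be made concrete: for Poisson counting, the relative standard error of a count N is roughly 1/√N, so the number of particles to be counted grows quadratically with the demanded precision. This is a back-of-the-envelope sketch, not a rigorous treatment of size-distribution statistics:

```python
import math

def particles_needed(relative_error):
    """Particles to count so that the Poisson relative standard error
    (~1/sqrt(N)) of their number falls below the target."""
    return math.ceil(1.0 / relative_error ** 2)

for err in (0.10, 0.05, 0.01):
    print(f"{err:.0%} relative error -> count >= {particles_needed(err)} particles")
```

If the size class of interest (e.g., the rare large aggregates) makes up only a small fraction of all particles, the total number of imaged particles must be scaled up by the inverse of that fraction, which is why thousands to millions of particles are needed.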

Another common feature of all microscopy techniques is that they require some level of sample preparation, ranging from the mildest, drying of the particles to a moist condition (AFM and ESEM), to high vacuum in SEM and TEM. In some methods the sample is also coated or stained. The transfer of the sample from its dispersed hydrated state to a dried high-vacuum state often means that the particle size distribution changes dramatically. For example, when a sample drop is evaporated to dryness (a common method), the particle and solute concentrations increase drastically in the shrinking volume of the drop before it finally evaporates, which leads to aggregation of particles and precipitation of salts. Some methods are used to preserve the hydrated state of the particles: either cryofixation, a rapid freezing in which the water forms non-crystalline ice, or embedding the particles in a water-soluble resin that fixes the water when it cures.

The three most common sample preparation methods for natural colloids are drop deposition, adsorption deposition and ultracentrifugation harvesting; the methods have been compared for AFM and electron microscopy, respectively (Balnois and Wilkinson 2002; Mavrocordatos et al. 2007).

Scanning electron microscopy

In the family of electron microscopy techniques the sample is exposed to a high-energy focused beam of electrons. In scanning electron microscopy (SEM) the beam is scanned over the sample and its interaction with the particle surface is measured as secondary electrons (most common), backscattered electrons or X-ray photons. Due to the high depth of field in SEM, a three-dimensional appearance can be obtained. The sample needs to be conductively coated with gold or graphite and maintained under ultrahigh vacuum so that the secondary electrons do not interact with gas molecules. The substrate is typically a filter membrane or a conducting grid.

Environmental scanning electron microscopy and related techniques

Due to the problems with morphological changes of the particles associated with the transfer to the high vacuum state, environmental scanning electron microscopy (ESEM) was developed, in which the sample cell is separated from the detector cell. This allows the sample to be measured under variable pressure and humidity (in theory up to 100%) with residual hydration water still on the particles. This water layer also serves as a conductor on the surface, so the sample does not need to be conductively coated. The resolution is decreased (from ∼10 to ∼100 nm) due to the interactions of the secondary electrons with the water vapor molecules, but there are fewer sample artifacts, for example from natural colloids (Doucet et al. 2005). ESEM still allows analysis of the emitted X-rays. Wet STEM is a method for scanning TEM analysis of a wet sample on a TEM grid in an ESEM microscope, utilizing dark-field imaging conditions, with a resolution of a few tens of nm (Bogner et al. 2005). A new sample capsule (WetSEM™) with electron transparent membranes provides an alternative to ESEM in ordinary SEM microscopes; the WetSEM capsules allow imaging under liquid or moist conditions (Thiberge et al. 2004). However, the loss of resolution is considerable (partly due to diffusion of the particles), the membrane is sensitive to radiation damage, and only particles close to the membrane are in focus.

Transmission electron microscopy

In transmission electron microscopy (TEM) the electron beam is transmitted through a very thin specimen on a conducting grid (e.g., a copper grid with a thin resin film such as formvar). After the beam has been transmitted through the sample and has interacted with the particles, the non-absorbed electrons are focused onto an imaging detector (fluorescent screen or CCD camera). In TEM the beam passes through the particles, and the absorbance (image contrast) is a function of both the electron density of the elements in a particle and the particle thickness. Organic matter containing only light elements needs to be stained with a heavy metal cocktail in order to be visible.

High-resolution TEM is a method that can give subnanometer resolution and is used in material science to study atom-by-atom structure. HR-TEM is a very demanding and time-consuming method but it has been applied to detect nanoparticle formation by bacteria or in geochemical processes (Banfield and Navrotsky 2001; Suzuki et al. 2002).

TEM has also been applied to characterize carbon nanoparticle dispersions in ecotoxicological exposure experiments (Smith et al. 2007).

Electron microscopy microanalysis

For all electron microscopy methods mentioned here, analysis of the spectral patterns of emitted X-rays (K, L and M lines) can be utilized to determine the elemental composition of the particles, if the microscope is fitted with an energy dispersive X-ray spectrometer (EDX, or sometimes EDS). The spatial resolution can be less than 10 nm. The sensitivity is best for heavier elements, so in practice it works best for the major elements of the particles and for associated heavy metals at fairly high concentrations. The measurement uncertainty of EDX is generally ∼20% (Mavrocordatos et al. 2004, 2007).

Electron energy loss spectrometry (EELS) is another elemental composition method that can be applied in either spectrometric or imaging mode in TEM. In EELS the energies lost through inelastic scattering processes (e.g., inner shell ionizations) can be interpreted in terms of which elements caused the scattering, since the energy losses are specific for each element. EELS results are more difficult to interpret than EDX, and the technique works best for the lighter elements (from carbon up to zinc). EELS can also be used to obtain additional chemical information (e.g., redox states of transition metals).

Atomic force microscopy

Atomic Force Microscopy (AFM) is a subnanometer resolution method in the family of scanning probe microscopy. It utilizes a cantilever with a very sharp tip (with a radius of tens of nm) that oscillates over the surface of the sample. The oscillating movement (Z-axis) and the scanning over the surface (X- and Y-axes) are controlled by piezoelectric actuators.

A laser-based balance measures both repulsive (Pauli principle) and attractive (van der Waals) forces between the tip and the sample in the range 10⁻⁷ to 10⁻¹² N. The occurrence of these forces at different stages of the cantilever oscillation can be used to derive the separation distance between the tip and the particles, and the resulting images represent an atomic force topography. The substrates on which the particle samples are prepared should be atomically flat (mica, graphite and silicon wafers are examples of suitable substrates). The preparation methods are typically drop deposition, adsorption deposition or ultracentrifugation, as for electron microscopy, but in addition it is possible to analyze samples under moist conditions or even in liquids, which affords minimum perturbation. However, under liquid conditions the particles are sometimes only weakly attracted to the substrate and are moved around, disturbing the images. Another feature of AFM is that, because of the geometry of the tip relative to the particle size, the tip starts to "feel" the particle well before its center has reached the particle periphery, and analogously continues to feel the particle forces too long when leaving it. Therefore the lateral dimensions are greatly overestimated, while the height measurements are very accurate. This should be kept in mind when interpreting AFM images; for example, a carbon nanotube can give a height of 1 nm but a width of up to 50 nm, even though these should be the same. The tip radius should be decreased if small particles are to be probed more accurately. The cantilever tip can be set to contact the particles, but lateral forces then lead to movement of particles, so tapping and non-contact modes have been developed that just feel the forces above the particles (Balnois et al. 2007). The latter has been shown to be more accurate for soft, compressible particles such as humic acids.
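The tip-broadening effect can be estimated with a simple geometric model: for a spherical tip of radius R scanning a spherical particle of radius r on a flat substrate, the measured height is correct (2r) but the apparent lateral width is about 4√(Rr). The tip radius below is assumed for illustration:

```python
import math

def apparent_width(tip_radius_nm, particle_radius_nm):
    """Geometric tip-convolution estimate: full apparent width = 4*sqrt(R*r),
    from the contact condition of two spheres (tip radius R, particle radius r)."""
    return 4.0 * math.sqrt(tip_radius_nm * particle_radius_nm)

# A 1 nm diameter nanotube (r = 0.5 nm) probed with an assumed 10 nm tip:
w = apparent_width(10.0, 0.5)
print(f"true height 1 nm, apparent width ~ {w:.1f} nm")
```

With blunter (larger-radius) tips the apparent width grows further, which is consistent with the order-of-magnitude lateral overestimation described above.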
AFM is one of the most common nanometrology methods and has numerous applications (e.g., Lead et al. 2005; Viguie et al. 2007).

In Fig. 3, a dispersed ZnO nanopowder sample (with manufacturer stated size 50–70 nm) has been prepared with adsorption deposition and analyzed with AFM, TEM and SEM. The difference in visualization and size measurements is clear. AFM and TEM show sintered aggregates with primary particles in the size range provided by manufacturers, whilst SEM shows mainly larger flakes of material with some nanoparticles on top. It is likely that the sample preparation and vacuum-induced changes can explain these differences.
Fig. 3

ZnO nanoparticle powder (50–70 nm, Sigma Aldrich UK), dispersed in distilled water (∼5 mg l⁻¹), allowed to dry on silica and imaged by AFM (1a and b), TEM (2) and SEM (3) under standard conditions

Surface charge measurements

Colloidal nanoparticles develop surface charges in aqueous solutions. The net surface charge, or surface potential, is one of the most important nanoparticle characteristics, since it describes to what extent the nanoparticle dispersion is electrostatically stabilized by interparticle repulsion. Consequently, ENP surface potential will have a major influence on their fate and behavior (Guzman et al. 2006; Hunter and Liss 1979). The surface potential is not easy to measure directly, but there is a simple method that measures the so-called zeta potential, the potential at the hydrodynamic slipping plane in the electrostatic double layer of the particles, by electrophoresis. The measured electrophoretic mobility can be converted to zeta potential through Smoluchowski's theory. The point of zero charge (PZC) is the pH where negative and positive charges are balanced, so that there is no net charge on the nanoparticles. At the PZC, aggregation is generally at its maximum, since the particles can come into close contact so that attractive van der Waals forces can act.
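In the Smoluchowski approximation the conversion from electrophoretic mobility μ to zeta potential is ζ = ημ/(ε₀εr). A minimal sketch for an aqueous dispersion at 25 °C, with an invented mobility value:

```python
# Smoluchowski approximation: zeta = eta * mu / (eps0 * eps_r).
ETA = 8.9e-4       # dynamic viscosity of water (Pa s)
EPS0 = 8.854e-12   # vacuum permittivity (F/m)
EPS_R = 78.5       # relative permittivity of water at 25 degC

def zeta_potential_mV(mobility):
    """Convert electrophoretic mobility (m^2 V^-1 s^-1) to zeta potential (mV)."""
    return ETA * mobility / (EPS0 * EPS_R) * 1000.0

# Hypothetical measured mobility of 2e-8 m^2/(V s):
print(f"zeta = {zeta_potential_mV(2.0e-8):.1f} mV")
```

The Smoluchowski form assumes a thin double layer relative to particle size; for very small particles or low ionic strength other conversion models would be needed.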

Surface area measurement

The Brunauer–Emmett–Teller (BET) method (Brunauer et al. 1938) is used to measure the specific surface area of solids. It involves drying a powder in vacuum and then measuring (using a microbalance) the adsorption of dinitrogen gas (assumed to form a monolayer) on the surface and in micropores. The BET method builds on the assumption that N2 has access to the complete surface of the particles. Other variants of this method, based on the adsorption of organic molecules (e.g., ethylene glycol monoethyl ether, EGME), can be used (Hassellöv et al. 2001). Dinitrogen gas gives higher surface areas than EGME, probably due to greater access to smaller pores.
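The linearized BET evaluation can be sketched as follows. The adsorption data are synthetic, generated to lie on a BET line with a monolayer volume of 5 cm³ STP, and not a real isotherm:

```python
# Linearised BET evaluation sketch:
# (p/p0)/(v*(1 - p/p0)) = 1/(vm*c) + ((c-1)/(vm*c)) * (p/p0)
N_A = 6.022e23        # Avogadro's number (1/mol)
SIGMA_N2 = 0.162e-18  # cross-sectional area of an adsorbed N2 molecule (m^2)
V_MOLAR = 22414.0     # molar gas volume at STP (cm^3/mol)

def bet_surface_area(p_rel, v_ads, mass_g):
    """Specific surface area (m^2/g) from a linear fit of the BET plot."""
    y = [xi / (vi * (1.0 - xi)) for xi, vi in zip(p_rel, v_ads)]
    n = len(p_rel)
    mx = sum(p_rel) / n
    my = sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(p_rel, y)) / \
            sum((xi - mx) ** 2 for xi in p_rel)
    intercept = my - slope * mx
    vm = 1.0 / (slope + intercept)   # monolayer volume (cm^3 STP)
    return vm * N_A * SIGMA_N2 / (V_MOLAR * mass_g)

# Synthetic isotherm points (relative pressure, adsorbed volume in cm^3 STP)
p_rel = [0.05, 0.10, 0.20, 0.30]
v_ads = [4.4228, 5.0968, 6.0096, 6.9800]
area = bet_surface_area(p_rel, v_ads, mass_g=0.5)
print(f"specific surface area = {area:.1f} m^2/g")
```

The fit recovers the monolayer volume from slope and intercept, and the specific surface area then follows from the number of molecules in the monolayer and the N2 cross-sectional area.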

Crystal structure

X-ray diffraction (XRD) measures lattice plane spacings from the interference between waves reflected from different crystal planes, and is used in mineralogy to determine the crystal structure of mineral particles. For example, XRD can distinguish between the anatase, rutile and amorphous phases of TiO2 nanoparticles. A dry sample needs to be prepared as a thin film. The elemental composition of major elements can also be obtained, although the sensitivity is low compared to dedicated elemental analysis methods (e.g., ICP-MS or AES).
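The phase distinction follows from Bragg's law, nλ = 2d sin θ. A sketch using Cu Kα radiation and the approximate positions of the strongest anatase and rutile reflections:

```python
import math

WAVELENGTH_CU_KA = 0.154056   # Cu K-alpha wavelength (nm)

def d_spacing(two_theta_deg, wavelength_nm=WAVELENGTH_CU_KA, order=1):
    """Bragg's law: n*lambda = 2*d*sin(theta), solved for d."""
    theta = math.radians(two_theta_deg / 2.0)
    return order * wavelength_nm / (2.0 * math.sin(theta))

# Approximate strongest reflections of the two TiO2 polymorphs (Cu K-alpha):
for name, two_theta in (("anatase (101)", 25.3), ("rutile (110)", 27.4)):
    print(f"{name}: d = {d_spacing(two_theta):.3f} nm")
```

The two phases give clearly separated d-spacings (roughly 0.35 vs. 0.33 nm), which is why the pattern identifies the polymorph even in a mixed sample.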

It is also possible in TEM to measure the diffraction patterns of single particles using a method called selected area electron diffraction (SAD, or SAED). In SAD, the user selects an area of the sample with a small aperture, and only the electron diffraction pattern from that area is measured. This has benefits over XRD for heterogeneous samples because it allows single particle characterization.

Differences in the analysis of particulate and nanoparticulate assemblages compared to conventional analysis of solutes

For the analysis of nanoparticle assemblages by bulk analytical methods (in contrast to single particle analysis methods, e.g., microscopy), whether in whole samples or in fractions after sample treatment (e.g., filtration or Field-Flow Fractionation), it must be recognized that certain methods behave differently than in the more common analysis of dissolved solutes (e.g., ions or molecules). In bulk analysis of a nanoparticle dispersion, the analyte mass is not homogeneously distributed but is instead present as discrete point masses. This is not a problem provided the probed sample volume of the method does not approach that of single nanoparticles. However, when samples at environmentally relevant concentrations are analyzed with methods that probe a very small sample volume (e.g., a very rapid measurement in a capillary, or a fast-flowing sample stream such as in mass spectrometers), the measurement may approach or enter the domain of single nanoparticle events. The consequence is a noisier signal, and if there is, statistically, less than one particle per measurement, the recovery of the determination decreases and the result is erroneous. Since the particle number (for the same mass) decreases rapidly with increasing particle size, this issue is more severe for larger particles than for smaller ones. The phenomenon is well known from slurry nebulization, i.e., ICP-MS analysis of micrometer-sized particle suspensions, and needs to be considered whenever the number concentration is low. Other problems may include non-quantitative measurement of the particles, for example through incomplete atomization in elemental analyses, or non-transparency and shading effects in spectroscopy. For slurry nebulization in ICP-AES or ICP-MS, particle size has been found to be the dominating factor in obtaining complete atomization: particles below 3–5 μm yield quantitative recoveries compared to solutions (Ebdon et al. 1997; Santos and Nobrega 2006). The main reason for decreasing recoveries was poor transport efficiency in the nebulizer–spray chamber system. This implies that for nanoparticles incomplete atomization should not be a problem, although aggregates, and particularly refractory materials such as carbides and some oxides, may still present problems.
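The 1/d³ scaling of particle number with size can be made concrete with a back-of-the-envelope calculation; the density and mass concentration below are illustrative choices (a TiO2-like material at 1 μg/L), not values from the studies cited:

```python
# At equal mass concentration, particle number scales as 1/d^3 for
# spheres, so a micrometer-sized fraction can drop to a few particles
# per probed volume (single-particle events) while the same mass of
# nanoparticles still gives a quasi-continuous signal.
import math

def number_concentration_per_mL(mass_conc_ug_per_L: float,
                                diameter_nm: float,
                                density_g_per_cm3: float) -> float:
    """Particle number per mL for monodisperse spheres."""
    d_cm = diameter_nm * 1e-7                           # nm -> cm
    particle_mass_g = density_g_per_cm3 * math.pi / 6.0 * d_cm**3
    mass_g_per_mL = mass_conc_ug_per_L * 1e-6 / 1000.0  # ug/L -> g/mL
    return mass_g_per_mL / particle_mass_g

n_50nm = number_concentration_per_mL(1.0, 50.0, 4.0)
n_5um  = number_concentration_per_mL(1.0, 5000.0, 4.0)
print(f"50 nm: {n_50nm:.2e} particles/mL")  # ~3.8e+06 per mL
print(f"5 um:  {n_5um:.2e} particles/mL")   # ~4 per mL: single-particle regime
```

A hundred-fold increase in diameter thus reduces the particle number by a factor of a million, which is why the low-number-concentration problem bites first for the largest particles and aggregates.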

Validation, measurement uncertainty and good laboratory practices

In metrology and analytical chemistry, it is fundamental to be able to report on the traceability of the acquired results. Calibration standards used for quantification are generally traceable to a primary national or international standard. However, for nanoparticles the validity of these standards has a shorter lifetime than for most other standards and is more sensitive to operating conditions. Nanoparticle standards, or reference materials, exist both as suspensions and as powders. Nanoparticle standards in suspension are generally labeled with expiry dates and instructions for storage. Sometimes there are also instructions on how to further dilute the standard in order to maintain its integrity. For powdered nanoparticle standards, no standardized dispersion procedure exists, and preparing the dispersion in each individual laboratory increases the uncertainty of the original metric stated by the manufacturer. Indeed, many metrics (e.g., size distribution) are strongly dependent on how the dispersion was made and in which medium (pH, ionic strength and composition, and presence of organic matter).

In addition to these nanometrology-specific issues, method validation involves the normal quality control (QC) of any analytical method (Table 5). The most important steps in analytical QC are method validation and quantification of measurement uncertainty. Method validation is simply an experimental procedure to confirm that the method and procedures (standard or in-house developed) comply with the documented specifications (e.g., limit of detection, linearity, determination of precision and accuracy, and robustness). One way of determining the accuracy is to use a certified reference material (CRM) of the same type as the samples and with documented property values within the range of the method (Table 5). CRMs or NIST-traceable size standards are, however, still very rare for nanoparticles. Reference materials with certified sizes exist for gold and polystyrene colloids in the nanometer size range, and more are under development through international efforts. Testing the homogeneity and shelf life of a reference material, and carrying out all the analyses needed to certify the material, is very elaborate and expensive. In the absence of CRMs, it is also possible to use non-certified materials (test materials) to benchmark analytical procedures and toxicity testing (Aitken et al. 2007).

Another option is to participate in interlaboratory comparisons, where a blind sample is sent to many laboratories for analysis, thereby affording a good indication of the accuracy and precision of the results. Interlaboratory comparisons are not yet as common in nanometrology as in conventional analytical chemistry, where rigorous quality assurance protocols are followed in order to achieve and maintain certified accreditation. There are, however, a few examples of informal interlaboratory comparisons on natural nanoparticles (Lead et al. 2000a) and on engineered nanomaterials (Breil et al. 2002) that have proved highly informative to the participants and to other users of the same methodologies. A good daily routine is to analyze a QC sample and plot its value in a control chart to monitor measurement uncertainty between interlaboratory comparisons. The QC sample should be stable over time and as similar to the usual samples as possible. In this way, method- or instrument-related problems in the laboratory can be discovered easily and quickly.
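Such a daily QC routine amounts to a simple Shewhart-type control chart; a minimal sketch, in which the historical QC values and the 2σ/3σ warning and action limits are an illustrative convention rather than a prescribed protocol:

```python
# Daily QC check against control-chart limits: warning at +/-2 sigma and
# action at +/-3 sigma around the historical mean of the QC sample.
# The historical values below are made up for illustration.
from statistics import mean, stdev

historical = [98.2, 101.5, 99.8, 100.4, 98.9, 101.1, 100.0, 99.5]
m, s = mean(historical), stdev(historical)

def qc_status(value: float) -> str:
    """Classify a new QC result against the 2-sigma/3-sigma limits."""
    dev = abs(value - m)
    if dev > 3 * s:
        return "action: out of control"
    if dev > 2 * s:
        return "warning"
    return "in control"

print(qc_status(100.2))  # in control
print(qc_status(104.5))  # action: out of control
```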

Good laboratory practice in the characterization of exposure/effect experiments should include minimal sample perturbation and determination of the dispersion–agglomeration state. Dynamic light scattering fulfills these criteria, is a simple measurement to perform, and is available in most academic institutions. However, for the reasons described previously, the results from DLS should not be over-interpreted. DLS is not primarily a size determination method, since it measures scattering-intensity-weighted diffusion coefficients. Thus, it is well suited to following the initial stages of aggregation, but not to providing nanoparticle sizes. For toxicity tests of nanoparticles, we suggest conducting a separate dispersion experiment under optimum conditions as a reference for the dispersion behavior in the effect media and during the course of the effect experiment. This reference experiment with maximum dispersion may include surfactants, co-solvents, a certain ionic strength and sonication. By comparing the results of the realistic effect/exposure experiments with this reference experiment, one can obtain information on the degree of aggregation.
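The size reported by a DLS instrument comes from the Stokes–Einstein relation applied to the measured diffusion coefficient; the sketch below uses standard constants, but the diffusion coefficient is an illustrative value and the hard-sphere assumption is exactly why aggregating or non-spherical samples should not be over-interpreted:

```python
# Stokes-Einstein: d_h = k_B * T / (3 * pi * eta * D). DLS measures an
# intensity-weighted D; the conversion to a hydrodynamic diameter assumes
# non-interacting hard spheres. In the Rayleigh regime scattering
# intensity scales roughly as d^6, so a few large aggregates dominate
# the measured average.
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
T = 298.15           # temperature, K
ETA = 0.89e-3        # viscosity of water at 25 C, Pa s

def hydrodynamic_diameter_nm(D_m2_per_s: float) -> float:
    """Hydrodynamic diameter (nm) from a translational diffusion coefficient."""
    return K_B * T / (3.0 * math.pi * ETA * D_m2_per_s) * 1e9

print(f"{hydrodynamic_diameter_nm(4.9e-12):.0f} nm")  # 100 nm
```

A growing hydrodynamic diameter over time in the effect medium, relative to the maximally dispersed reference, is then a direct indicator of aggregation.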

If the competence and equipment are available, a less biased (but slightly more perturbing) determination of the size distribution can be achieved using, e.g., Field-Flow Fractionation. Microscopy (e.g., AFM, SEM or TEM) is very powerful for imaging nanoparticles and aggregates, but the aggregation state of the sample may have changed during sample preparation.

Acknowledgements

Hassellöv thanks the Swedish Environmental Research Council FORMAS and the University of Gothenburg Nanoparticle platform for financial support. J. Readman acknowledges partial support of his contribution through the UK Natural Environment Research Council Environmental Nanoscience Initiative (Grant Reference Number: NE/E014321/1). Ranville acknowledges partial support through EPA STAR Grant RD-83332401-0.

Copyright information

© Springer Science+Business Media, LLC 2008