Ultraviolet and Visible Spectroscopy

Easy read on Ultraviolet and Visible Spectroscopy in 2021

Know about the principles, instrumentation, and applications of Ultraviolet and Visible Spectroscopy

   Ultraviolet-visible spectroscopy (UV-Vis) refers to absorption spectroscopy or reflectance spectroscopy in the ultraviolet-visible spectral region. It uses light in the visible and adjacent (near-UV and near-infrared) ranges. The absorption or reflectance in the visible range directly affects the perceived colour of the chemicals involved. In this region of the electromagnetic spectrum, molecules undergo electronic transitions.

This method is complementary to fluorescence spectroscopy: fluorescence deals with transitions from the excited state to the ground state, while absorption measures transitions from the ground state to the excited state. UV-Vis spectroscopy is widely used and has had a significant impact on progress in biochemical and analytical research. Work with biomolecules recovered from column chromatography, enzyme assays, and density gradient centrifugation depends heavily on UV-Vis spectroscopy.

Principle

          Most of the electrons in a molecule are in the ground state. When a compound absorbs UV (200-400 nm) or visible (400-700 nm) radiation, both bonding and non-bonding outer electrons are excited from a lower energy level (the ground state) to a higher energy level (an excited state). The technique exploits the ability of a compound to absorb light of specific wavelengths, which defines its absorption spectrum. The UV-visible spectrum of a compound therefore carries information about the ground state and the next higher excited states, because the allowed transitions determine the wavelengths of absorbed light; the absorption peaks, in turn, help in determining molecular structure.
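
In quantitative work, the measured absorbance is related to concentration through the Beer-Lambert law, A = εlc, where ε is the molar absorptivity, l the path length, and c the concentration. A minimal sketch in Python (the function name and the NADH example are illustrative, not from the article):

```python
def concentration_from_absorbance(absorbance, molar_absorptivity, path_length_cm=1.0):
    """Beer-Lambert law: A = epsilon * l * c, solved for c (mol/L)."""
    return absorbance / (molar_absorptivity * path_length_cm)

# Example: NADH at 340 nm has a molar absorptivity of ~6220 L mol^-1 cm^-1.
# An absorbance of 0.62 in a standard 1 cm cuvette then corresponds to:
c = concentration_from_absorbance(0.62, 6220.0)
print(f"Concentration ~ {c:.2e} mol/L")  # ~1.0e-04 mol/L
```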

Instrumentation

          A spectrophotometer essentially has the following parts:

Ultraviolet and Visible Spectroscopy. Image courtesy: https://upload.wikimedia.org

  1. Light Source

       A UV-Vis spectrophotometer has two light sources: a tungsten lamp for the visible region and a hydrogen or deuterium lamp for the ultraviolet region (the deuterium lamp gives a more intense and extended emission than the hydrogen lamp). This polychromatic light is reflected by a plane mirror, passes through an entrance slit and a condensing lens, and falls on a monochromator. The monochromator disperses the light, and the desired wavelength is focused on the exit slit.

  2. Monochromators

      Monochromators produce radiation of a single wavelength based either on refraction by a prism or on diffraction by a grating. Prisms are made of glass for the visible region and of quartz or silica for the UV region. The resolving power of a grating is directly proportional to the total number of lines ruled on it. Gratings are superior to prisms because they yield a linear dispersion of the spectrum.
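
For a diffraction grating, that resolving power is usually written R = λ/Δλ = nN, where n is the diffraction order and N the total number of illuminated lines. A quick illustrative calculation (the grating parameters below are assumptions, not values from the article):

```python
def grating_resolving_power(order, lines_per_mm, illuminated_width_mm):
    """R = n * N: diffraction order times total number of illuminated lines."""
    return order * lines_per_mm * illuminated_width_mm

# A first-order grating with 1200 lines/mm illuminated over 50 mm:
R = grating_resolving_power(order=1, lines_per_mm=1200, illuminated_width_mm=50)
print(R)          # 60000
print(500.0 / R)  # smallest resolvable interval at 500 nm: ~0.008 nm
```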

  3. Cuvettes

      Cuvettes are optically transparent cells made of glass, plastic, silica, or quartz. Glass and plastic absorb UV light below about 310 nm and hence cannot be used in the UV region. Silica and quartz do not absorb UV light, so they can be used in both visible and UV spectrophotometers; they transmit radiation down to about 190 nm. Standard cuvettes are made of quartz, have an optical path length of 1 cm, and hold a volume of 1 to 3 mL. Microcuvettes (0.3 to 0.5 mL) are used for measurements of expensive chemicals.

  4. Photomultiplier tubes

       A photoelectric device converts light energy into electrical energy, which is then amplified, detected, and recorded. A photomultiplier tube has a cathode with a photoemissive surface and a wire anode; it also has nine additional electrodes called dynodes. Electrons emitted from the photoemissive cathode strike the first dynode, which emits several additional electrons. These electrons are accelerated towards the second dynode, which multiplies them again, and so on down the chain; the amplified stream of electrons finally flows to the anode, generating a much larger photocurrent than a simple photocell.
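
The multiplication described above compounds at every dynode: if each dynode releases δ secondary electrons per incident electron, an n-dynode tube has a gain of roughly δⁿ. A toy calculation under an assumed, typical δ:

```python
def pmt_gain(secondary_emission_ratio, n_dynodes):
    """Approximate photomultiplier gain: delta raised to the number of dynodes."""
    return secondary_emission_ratio ** n_dynodes

# With 9 dynodes each emitting ~4 secondary electrons per incident electron:
print(f"Gain ~ {pmt_gain(4, 9):.1e}")  # ~2.6e+05: one photoelectron -> ~260,000 electrons
```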

Applications

  • UV-Visible spectrophotometers are considerably more refined instruments than colourimeters and give better resolution and accuracy.
  • They are used to estimate the concentration of coloured and colourless solutions that absorb light.
  • Because of their high sensitivity, they can detect even very small quantities of analyte.
  • The technique generally does not degrade or alter the substance under examination, which can therefore be recovered and reused.
  • It is used to find the absorption maxima of compounds over a broad range of wavelengths.
  • It also makes it possible to follow the course of reactions and enzyme kinetics.
  • It is also used to follow the growth of bacteria and yeast and to estimate the number of cells in a culture (see the sketch below).
  • Very small volumes, such as 300 µL, can be used for measurements of precious samples.
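
For the cell-counting application above, turbidity is usually read as optical density at 600 nm (OD600) and converted with an empirical calibration factor. A hedged sketch, assuming the commonly quoted figure of roughly 8 × 10⁸ E. coli cells per mL per OD600 unit (the true factor depends on strain and instrument):

```python
CELLS_PER_ML_PER_OD = 8e8  # rough E. coli calibration; verify on your own instrument

def cells_per_ml(od600, dilution_factor=1):
    """Estimate cell density from an OD600 reading of a (possibly diluted) culture."""
    return od600 * dilution_factor * CELLS_PER_ML_PER_OD

# A culture diluted 1:10 that reads OD600 = 0.35:
print(f"~{cells_per_ml(0.35, dilution_factor=10):.1e} cells/mL")  # ~2.8e+09
```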

Now that you have read about Ultraviolet and Visible Spectroscopy, check out our course on Ampersand Academy and read this interesting article about Fourier Transform Infrared Spectroscopy.

Fourier Transform Infrared Spectroscopy

Easy read on Fourier Transform Infrared Spectroscopy (FT-IR) in 2021

Know the principles and instrumentation of Fourier Transform Infrared Spectroscopy (FT-IR)

Fourier Transform Infrared Spectroscopy

FT-IR stands for Fourier Transform InfraRed, the preferred method of infrared spectroscopy. In infrared spectroscopy, IR radiation is passed through a sample. A portion of the infrared radiation is absorbed by the sample, and some of it is transmitted. The resulting spectrum represents the molecular absorption and transmission, creating a unique molecular fingerprint of the sample. Like a fingerprint, no two unique molecular structures produce the same infrared spectrum. This makes infrared spectroscopy useful for several kinds of analysis. FT-IR can provide the following information:

•        It can identify unknown materials.

•        It can determine the quality or consistency of a sample.

•        It can determine the amount of components in a mixture.

Need for FTIR

Fourier Transform Infrared spectroscopy was developed to overcome the limitations encountered with dispersive instruments. The main difficulty was the slow scanning process: a method was needed for measuring all of the infrared frequencies simultaneously rather than individually. A solution was developed which employed a simple optical device called an interferometer. The interferometer produces a unique type of signal which has all of the infrared frequencies "encoded" into it. The signal can be measured very quickly, usually on the order of one second or so. Thus, the time element per sample is reduced to a matter of a few seconds rather than several minutes.

             Most interferometers employ a beamsplitter which takes the incoming infrared beam and divides it into two optical beams. One beam reflects off a flat mirror which is fixed in place. The other beam reflects off a flat mirror mounted on a mechanism that allows this mirror to move a very short distance (typically a few millimetres) away from the beamsplitter. The two beams reflect off their respective mirrors and are recombined when they meet back at the beamsplitter.

Because the path that one beam travels is of fixed length and the other is constantly changing as its mirror moves, the signal which exits the interferometer is the result of these two beams "interfering" with each other. The resulting signal is called an interferogram, which has the unique property that every data point making up the signal carries information about every infrared frequency coming from the source.

This means that as the interferogram is measured, all frequencies are being measured simultaneously, so the use of the interferometer results in extremely fast measurements. Because the analyst requires a frequency spectrum in order to make an identification, the measured interferogram signal cannot be interpreted directly. A means of "decoding" the individual frequencies is required. This is accomplished via a well-known mathematical technique called the Fourier transformation. The transformation is performed by the computer, which then presents the user with the desired spectral information for analysis.
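
A toy numerical version of this decoding step, using NumPy's FFT in place of the instrument's Fourier-transform stage and an idealised, noise-free interferogram built from two cosine components (all numbers are illustrative):

```python
import numpy as np

# Simulate an ideal interferogram: two cosine components in optical path difference.
n, dx = 4000, 1e-4                      # 4000 samples, 1e-4 cm step in mirror travel
opd = np.arange(n) * dx                 # optical path difference in cm
bands = [1600.0, 2350.0]                # "absorption" wavenumbers in cm^-1 (illustrative)
interferogram = sum(np.cos(2 * np.pi * k * opd) for k in bands)

# Decode it: the FFT recovers the spectrum on a wavenumber axis in cycles/cm = cm^-1.
spectrum = np.abs(np.fft.rfft(interferogram))
wavenumber_axis = np.fft.rfftfreq(n, d=dx)

# The two strongest bins recover the input wavenumbers.
print(np.sort(wavenumber_axis[np.argsort(spectrum)[-2:]]))  # [1600. 2350.]
```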

Instrumentation 

The instrumental process is as follows:

  • Source

          Infrared energy is emitted from a glowing black-body source. This beam passes through an aperture which controls the amount of energy presented to the sample.

  • Interferometer

            The beam enters the interferometer, where the "spectral encoding" takes place. The resulting interferogram signal then exits the interferometer.

  • Sample

           The beam enters the sample compartment, where it is transmitted through or reflected off the surface of the sample, depending on the type of analysis being performed. This is where specific frequencies of energy, uniquely characteristic of the sample, are absorbed.

  • Detector

             The beam finally passes to the detector for final measurement. The detectors used are specially designed to measure the particular interferogram signal.

  • Computer

            The measured signal is digitised and sent to the computer, where the Fourier transformation takes place. The final infrared spectrum is then presented to the user for interpretation and any further manipulation.

              Because there needs to be a relative scale for the absorption intensity, a background spectrum must also be measured. This is normally a measurement with no sample in the beam. It can then be compared with the measurement with the sample in the beam to determine the "percent transmittance".

This technique results in a spectrum which has all of the instrumental characteristics removed. Thus, all spectral features which are present are strictly due to the sample. A single background measurement can be used for many sample measurements, since this spectrum is characteristic of the instrument itself.
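
The ratioing described here is simple arithmetic: %T = 100 · S(ν̃)/B(ν̃), and absorbance follows as A = −log₁₀ T. A sketch of how two measured single-beam spectra might be combined (the array values are invented for illustration):

```python
import numpy as np

def to_transmittance_and_absorbance(sample_beam, background_beam):
    """Ratio a sample single-beam spectrum against a background single-beam spectrum."""
    transmittance = sample_beam / background_beam      # fraction of light transmitted
    percent_t = 100.0 * transmittance
    absorbance = -np.log10(transmittance)
    return percent_t, absorbance

# Illustrative single-beam intensities at three wavenumbers:
background = np.array([10.0, 10.0, 10.0])
sample = np.array([9.0, 2.5, 8.0])     # middle point absorbs strongly
pt, a = to_transmittance_and_absorbance(sample, background)
print(pt)  # [90. 25. 80.]
print(a)   # [0.046 0.602 0.097] (approximately)
```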

Fourier Transform Infrared Spectroscopy. Image courtesy: https://image2.slideserve.com

Advantages of FTIR

  • Speed
  • Sensitivity
  • Mechanical Simplicity
  • Internally Calibrated

Now that you have read about Fourier Transform Infrared Spectroscopy, check out our course on Ampersand Academy and read this interesting article about Differential Scanning Calorimetry.

Differential Scanning Calorimetry DSC

Easy read on Differential Scanning Calorimetry (DSC) in 2021

Know a short, simple description of the instrument Differential Scanning Calorimetry (DSC)

Differential scanning calorimetry is a technique in which the heat flux to the sample is monitored against time or temperature while the temperature of the sample, in a specified atmosphere, is programmed. In practice, the difference in heat flux to a pan containing the sample and to an empty pan is monitored. The instrument used is a differential scanning calorimeter, or DSC. DSCs are commercially available as power-compensation DSCs or heat-flux DSCs.

Principle

When a sample undergoes a physical change, such as a phase transition, more or less heat must flow to it than to the reference (typically an empty sample pan) to keep both at the same temperature. Whether more or less heat must flow to the sample depends on whether the process is exothermic or endothermic. For example, as a solid sample melts to a liquid, more heat must flow to the sample to raise its temperature at the same rate as the reference.

This is due to the absorption of heat by the sample as it undergoes the endothermic phase transition from solid to liquid. Likewise, as the sample undergoes exothermic processes (such as crystallisation), less heat is required to raise the sample temperature. By observing the difference in heat flow between the sample and reference, a DSC can measure the amount of heat absorbed or released during such transitions.

  Heat Flux DSC 

A heat-flux DSC contains the sample and reference holders, a heating resistor, a heat sink, and a heater. Heat from the heater is supplied to the sample and the reference through the heat sink and the heat resistor. The heat flow is proportional to the temperature difference between the heat sink and the holders. The heat sink has a large heat capacity compared with the sample. If the sample undergoes endothermic or exothermic phenomena, such as a transition or a reaction, the heat involved is compensated by the heat sink.

As a result, the temperature difference between the sample and the reference is kept constant. The difference between the amounts of heat supplied to the sample and to the reference is proportional to the temperature difference between the two holders. By calibrating with a standard material, quantitative measurement of an unknown sample becomes possible.

Differential Scanning Calorimetry. Image courtesy: Kodre et al., 2014

Power Compensation DSC

 Power-compensation DSC is a system in which the difference in thermal energy applied to the sample and the reference material per unit time is measured, as a function of temperature, so as to equalise their temperatures while the temperature of the sample unit (formed by the sample and reference material) is varied according to a set programme. A power-compensation DSC has two nearly identical (in terms of heat losses) measuring cells, one for the sample and one for the reference holder.

The two cells are heated by independent heaters, and their temperatures are measured with separate sensors. The temperatures of the two cells can be varied linearly as a function of time, controlled by an average-temperature control loop. A second, differential control loop adjusts the power input whenever a temperature difference starts to develop because of some exothermic or endothermic process in the sample. The differential power signal is recorded as a function of the actual sample temperature.

Image courtesy: Kodre et al., 2014

Typical DSC Curve

     The result of a DSC analysis is a curve of heat flux versus temperature or versus time. This curve can be used to calculate enthalpies of transitions, which is done by integrating the peak corresponding to a given transition. The enthalpy of transition can be expressed by the equation:

ΔH = KA

where ΔH = enthalpy of transition,

     K = calorimetric constant,

     A = area under the peak.

        Image courtesy: Kodre et al., 2014

The calorimetric constant varies from instrument to instrument and can be determined by analysing a well-characterised material of known enthalpy of transition. The area under the peak is directly proportional to the heat absorbed or evolved by the reaction, and the height of the peak is directly proportional to the rate of the reaction.
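
A minimal numerical sketch of ΔH = KA: subtract the baseline from the heat-flow signal, integrate the peak over time to get the area A, and scale by the calorimetric constant K obtained from calibration (all numbers below are invented for illustration):

```python
import numpy as np

def transition_enthalpy(time_s, heat_flow_mw, baseline_mw, k_cal=1.0):
    """Delta-H = K * A: trapezoidal area of the baseline-corrected peak (mJ here)."""
    y = heat_flow_mw - baseline_mw
    area = np.sum(0.5 * (y[:-1] + y[1:]) * np.diff(time_s))  # mW * s = mJ
    return k_cal * area

# A synthetic Gaussian melting endotherm riding on a flat 2 mW baseline:
t = np.linspace(0, 120, 600)                         # seconds
signal = 2.0 + 15.0 * np.exp(-((t - 60) / 10.0) ** 2)
print(f"Peak area ~ {transition_enthalpy(t, signal, 2.0):.0f} mJ")  # ~266 mJ
```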

The factors affecting the DSC curve are as follows:

Instrumental factors

• Furnace heating rate

• Recording or chart speed

• Furnace atmosphere

• The geometry of the sample holder/location of sensors

• The sensitivity of the recording system

• Composition of sample containers

Sample characteristics

• Amount of sample

• Nature of sample

• Sample packing

• The solubility of evolved gases in the sample

• Particle size

• Heat of reaction

• Thermal conductivity

Applications

  • Determination of Heat Capacity
  • The Glass Transition Temperature
  • Crystallisation
  • Melting
  • Drug analysis
  • Polymers
  • Food science

Now that you have read about Differential Scanning Calorimetry, check out our course on Ampersand Academy and read this interesting article about Nanotechnology & Nanomedicine.

Nanotechnology & Nanomedicine

Easy read on Nanotechnology & Nanomedicine in 2021

Nanotechnology & Nanomedicine – a detailed overview

Nanotechnology is defined as the multidisciplinary field that aims to control matter at the atomic and molecular levels. The word was used for the first time by Professor Norio Taniguchi in 1974, even though the ideas and concepts behind nanoscience had already been theorised in the famous lecture given by the Nobel prize-winning physicist Richard Feynman at the American Physical Society meeting at the California Institute of Technology (Caltech) on December 29th, 1959. [1, 2]

In his speech, entitled "There's Plenty of Room at the Bottom," Feynman already supported the idea that "When we get to the very, very small world…we have a lot of new things that would happen that represent completely new opportunities for design." [3] Nonetheless, the first applications of modern nanotechnology were accomplished only in the 1980s, with the invention of two instruments: the Scanning Tunneling Microscope (STM) and the Atomic Force Microscope (AFM). These essential tools paved the way for imaging at the atomic level and for the manipulation of individual atoms. [1]

    The main advantages of nanostructures are their small size and their high surface-to-volume ratio, which implies high packing density and strong lateral interactions. The fundamental difference from microstructures lies in the physical behaviour of nanostructures, whose magnetic and electric properties are determined by quantum mechanics. [4] These features make nanotechnology suitable for applications in almost any field, from the energy sector to the environment, through electronics and food production, to cosmetics.
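
To make the surface-to-volume argument concrete: for a spherical particle, S/V = 3/r, so every tenfold reduction in radius raises the ratio tenfold. A small illustrative calculation:

```python
def surface_to_volume_ratio(radius_nm):
    """S/V for a sphere: (4*pi*r^2) / (4/3*pi*r^3) = 3/r, in nm^-1."""
    return 3.0 / radius_nm

for r in (500.0, 50.0, 5.0):   # 1 um, 100 nm, and 10 nm diameter particles
    print(f"r = {r:6.1f} nm  ->  S/V = {surface_to_volume_ratio(r):.3f} nm^-1")
# Each tenfold reduction in radius gives a tenfold increase in S/V.
```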

Among these areas of interest, nanotechnology plays a fundamental role in medicine, which is perhaps to be expected since the main components of living cells are at the nanoscale. [1] Nanomedicine was defined as "the science and technology of diagnosing, treating, and preventing disease and traumatic injury, of relieving pain, and of preserving and improving human health, using molecular tools and molecular knowledge of the human body" by the Medical Standing Committee of the European Science Foundation (ESF) in 2004. Its five main sub-areas are: analytical tools, nanoimaging, nanomaterials and nanodevices, novel therapeutics and drug delivery systems, and clinical, regulatory, and toxicological issues. [5]

    Among the applications mentioned, a key role is played by the formulation of drugs at the nanoscale. This kind of delivery ensures higher drug stability, decreased clearance rates, longer circulation times, improved solubility, and the possibility of higher selectivity thanks to active targeting through functionalisation of the carriers. [6, 7] However, large companies are sometimes unwilling to invest in the research and development of nanomedicines because of concerns about their safety and uncertainties about the regulations to be applied to this kind of technology. [8, 9] Nonetheless, since the first nanotherapeutic gained clinical approval in 1995, 50 other nanoparticle-based drugs have entered clinical practice in the following two decades. [7]


Steps that a drug must undergo before entering the market: from discovery to registration approval, passing through formulation optimisation and several clinical trials. On average, for every ~10,000 compounds evaluated in preclinical studies, about five enter clinical trials and only one finally receives regulatory approval from the US Food and Drug Administration (FDA). In the U.S., the mean time from the synthesis of a new compound to marketing approval is 14.2 years. (Image courtesy: Schio et al., 2017)

    While a significant part of approved nanomedicines are nanoformulations of already existing drugs, increasing interest is also directed at nanoformulations of cancer chemotherapies, which in classical medicine are delivered in highly toxic solvents, as well as at antimicrobial therapy. Nevertheless, one of the most exciting aspects is the possibility of translating novel therapies based on nucleic acids (including siRNA, antisense RNA, shRNA, miRNA, and gene delivery) into clinical applications. [7] In particular, altering gene expression in vivo by selective miRNA delivery for therapeutic purposes is a challenging task; the development of suitable nanocarriers for miRNA delivery is, in fact, a hot topic which has attracted increasing attention over the last two decades.

    Another field of nanomedicine currently under development is the use of nanotechnologies in diagnostics and imaging. Molecular imaging allows the measurement of biological processes at the cellular or molecular level. It includes several techniques, such as optical bioluminescence, optical fluorescence, targeted ultrasound, molecular magnetic resonance imaging (MRI), magnetic resonance spectroscopy (MRS), single-photon-emission computed tomography (SPECT), and positron emission tomography (PET). The advantages of using nanoparticles (NPs) instead of single-molecule contrast agents lie in improved image contrast, longer circulation times, and the possibility of carrying higher payloads. [1, 12]

    A great novelty brought by nanotechnology to healthcare is the combination of the two applications just described. NP-based imaging and therapy had been investigated separately, but multifunctional nanoplatforms now allow the simultaneous delivery of therapeutic, targeting, and imaging agents. This new discipline is called theranostics, a term coined for NPs used for simultaneous diagnosis and treatment, originally attributed to Funkhouser approximately ten years ago. [13-16]

Schematic illustration of cancer diagnosis and treatment employing nanoparticles, an example of how theranostic nanodevices work. (Image courtesy: Ahmed et al., 2012)

Theranostics has significantly contributed to the growing field of personalised medicine. These dual-purpose nanomaterials, used for simultaneous diagnosis and therapy, can provide hints on the site of accumulation of the therapeutic agent, monitoring both its ability to reach the given target and its tendency to accumulate in healthy tissues. This possibility of drug monitoring allows the establishment of correct doses, the recognition of negative side effects at early stages of therapy, and real-time monitoring of the patient's therapeutic response. [14, 16-18]

References:

[1] S. Kargozar and M. Mozafari, “Nanotechnology and Nanomedicine: Start small, think big,” Materials Today: Proceedings, no. 5, p. 15492–15500, 2018.

[2] U.S. National Nanotechnology Initiative, "National Nanotechnology Initiative," [Online]. Available: https://www.nano.gov. [Accessed October 2018].

[3] R. P. Feynman, “Plenty of Room at the Bottom,” in American Physical Society meeting, Pasadena, 1959.

[4] G. Whitesides and P. Alivisatos, “Fundamental Scientific Issues for Nanotechnology.,” in Nanotechnology Research Directions: IWGN Workshop Report. Dordrecht, Springer, 2000, pp. 1-24.

[5] T. J. Webster, "Nanomedicine: what's in a definition?" International Journal of Nanomedicine, vol. 1, no. 2, pp. 115-116, 2006.

[6] I. Fernandez-Piñeiro, I. Badiola and A. Sanchez, “Nanocarriers for microRNA delivery in cancer medicine,” Biotechnology Advances, no. 35, pp. 350-360, 2017.

[7] J. M. Caster, A. N. Patel, T. Zhang and A. Wang, "Investigational nanomedicines in 2016: a review of nanotherapeutics currently undergoing clinical trials," WIREs Nanomed Nanobiotechnol, vol. 9, 2017.

[8] L. Jin, X. Zeng, M. Liu, Y. Deng and N. He, “Current Progress in Gene Delivery Technology based on Chemical methods,” Theranostics, vol. 4, no. 3, pp. 240-255, 2014.

[9] P. Boisseau and B. Loubaton, "Nanomedicine, nanotechnology in medicine," C. R. Physique, vol. 12, p. 620-636, 2011.

[10] L. Schio, Strategies for Drug Discovery: New paradigms in Oncology & eADMET properties optimization Perspectives, ENSCP Paris, 2017.

[11] J. K. Willmann, N. van Bruggen, L. M. Dinkelborg and S. S. Gambhir, “Molecular imaging in drug development,” Nature Reviews Drug Discovery, vol. 7, July 2008.

[12] W. Cai and X. Chen, “Nanoplatforms for Targeted Molecular Imaging in Living Subjects,” Small, vol. 3, no. 11, pp. 1840-1854, 2007.

[13] S. M. Janib, A. S. Moses and J. A. MacKay, "Imaging and drug delivery using theranostic nanoparticles," Advanced Drug Delivery Reviews, vol. 62, pp. 1052-1063, 2010.

[14] J. Xie, S. Lee and X. Chen, “Nanoparticle-based theranostic agents,” Advanced Drug Delivery Reviews, vol. 62, pp. 1064-1079, 2010.

[15] T. L. Doane and C. Burda, "The unique role of nanoparticles in nanomedicine: imaging, drug delivery and therapy," Chem. Soc. Rev., vol. 41, p. 2885-2911, 2012.

[16] N. Ahmed, H. Fessi and A. Elaissari, “Theranostic applications of nanoparticles in cancer,” Drug Discovery Today, vol. 17, no. 17/18, 2012.

[17] J. H. Ryu, S. Lee, S. Son, S. H. Kim, J. F. Leary, K. Choi and I. C. Kwon, “Theranostic nanoparticles for future personalized medicine,” Journal of Controlled Release, vol. 190, pp. 477-484, 2014.

[18] L. Y. Rizzo, B. Theek, G. Storm, F. Kiessling and T. Lammers, “Recent progress in nanomedicine: therapeutic, diagnostic and theranostic applications,” Current Opinion in Biotechnology, vol. 24, pp. 1159-1166, 2013.

Now that you have read about Nanotechnology & Nanomedicine, check out our course on Ampersand Academy and read this interesting article about Thermogravimetric Analysis.

Thermogravimetric Analysis

Easy read on Thermogravimetric Analysis in 2021

Know in detail about thermogravimetric analysis

Prepared by Dr. J. Helan Chandra

Thermogravimetric analysis (TGA) is a thermal method in which the change in the weight of a sample, held in a controlled atmosphere (or under vacuum), is measured over time as the temperature is changed.

Principle

It is performed by gradually raising the temperature of a sample kept inside a furnace while its mass is measured with a balance that remains outside the furnace.

Mechanism of change in mass:

In TGA, weight loss may be observed due to decomposition (breaking of chemical bonds), evaporation (volatilisation, vaporisation, or sublimation), reduction, or desorption. A gain in weight may be due to oxidation or adsorption/absorption.

Application

  1. Characterisation study
  • To measure thermal stability
  • To identify the purity of the sample
  • To determine the moisture content of the sample
  • To determine transition temperatures
  2. Examination study
  • To study the kinetics of sample disintegration
  • To analyse corrosion of the sample with respect to oxidation
  • To analyse thermostable polymers

Types of TGA

It is classified into three types.

  1. Isothermal or static thermogravimetry: the change in the weight of the sample is measured at a constant temperature as a function of time.
  2. Quasistatic thermogravimetry: the sample is heated to constant weight at each of a series of increasing temperatures.
  3. Dynamic thermogravimetry: the sample is heated in an environment whose temperature is changed in a linear manner.

Factors affecting the TG curve

Instrumental factors:

• Furnace heating rate

• Furnace atmosphere

Sample characteristics:

• Weight of the sample

• Sample particle size

Instrumentation

  1. Furnace: the temperature of the sample in its holder inside the furnace is raised slowly, and the temperature and the corresponding weight are recorded in order to obtain the thermogram.
  2. Thermobalance: the sample holder inside the furnace is attached to the balance, with a thermocouple to track the changes in weight and temperature.
  3. Recorder: the thermogram is obtained by recording the change in weight on the Y-axis and the temperature on the X-axis.

TGA of Calcium oxalate monohydrate (CaC2O4.H2O)

• The successive plateaus correspond to the formation of the anhydrous salt, calcium carbonate, and calcium oxide.

(1) CaC2O4.H2O → CaC2O4 + H2O

(2) CaC2O4 → CaCO3 + CO

(3) CaCO3 → CaO + CO2

• The thermogram indicates that the loss of water begins at about 100°C, the loss of CO at about 400°C, and the loss of CO2 at about 680°C.
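
These plateaus can be checked against theory: each step's expected mass loss follows from the molar masses of the leaving groups. A short sketch computing the theoretical residue percentages for CaC2O4.H2O (molar masses in g/mol, rounded):

```python
# Approximate molar masses in g/mol
M = {"CaC2O4.H2O": 146.11, "H2O": 18.02, "CO": 28.01, "CO2": 44.01}

start = M["CaC2O4.H2O"]
remaining = start
for step, leaving in [("dehydration", "H2O"), ("CO loss", "CO"), ("CO2 loss", "CO2")]:
    remaining -= M[leaving]
    print(f"{step:12s}: residue {100 * remaining / start:5.1f} % of initial mass")
# dehydration : residue  87.7 %
# CO loss     : residue  68.5 %
# CO2 loss    : residue  38.4 %  (CaO)
```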


Now that you have read about Thermogravimetric Analysis, check out our course on Ampersand Academy and read this interesting article about Thin Film Deposition.

Thin Film Deposition

Easy read on Thin Film Deposition and its types in 2021

Know in detail about thin film deposition, its various types, and applications

Thin Film Deposition

The act of applying a thin film to a surface is known as thin-film deposition: any technique for depositing a thin film of material onto a substrate or onto previously deposited layers. "Thin" is a relative term, but most deposition techniques control layer thickness to within a few tens of nanometres, and molecular beam epitaxy allows single layers of atoms to be deposited at a time.

It is useful in the production of optics (for reflective or anti-reflective coatings, for instance), packaging (aluminium-coated PET film), electronics (layers of insulators, semiconductors, and conductors form integrated circuits), and in contemporary art. Similar processes are sometimes used where thickness is not essential: for example, the purification of copper by electroplating, and the deposition of silicon and enriched uranium by a CVD-like process after gas-phase processing.

Thin Film Deposition. Image courtesy: intechopen.com

Deposition techniques fall into two broad categories, depending on whether the process is primarily chemical or physical.

Chemical Deposition

Here, a fluid precursor undergoes a chemical change at a solid surface, leaving a solid layer. An everyday example is the formation of soot on a cool object when it is placed inside a flame. Since the fluid surrounds the solid object, deposition happens on every surface, with little regard to direction; thin films from chemical deposition techniques tend to be conformal, rather than directional.

Types of Chemical Deposition

Chemical solution deposition (CSD) uses a liquid precursor, usually a solution of organometallic powders dissolved in an organic solvent. Plating relies on liquid precursors, often a solution of water with a salt of the metal to be deposited. Some plating processes are driven entirely by reagents in the solution (usually for noble metals), but by far the most commercially important process is electroplating.

Electroplating was not generally used in semiconductor processing for many years, but has seen a resurgence with the more widespread use of chemical-mechanical polishing. CSD, for its part, is a relatively inexpensive, simple thin-film process that can produce stoichiometrically accurate crystalline phases.

Chemical vapour deposition (CVD) generally uses a gas-phase precursor, often a hydride or halide of the element to be deposited. In the case of MOCVD, an organometallic gas is used. Commercial techniques often use very low pressures of precursor gas.

Plasma-enhanced CVD (PECVD) uses an ionised vapour, or plasma, as a precursor. Unlike the soot example above, commercial PECVD relies on electromagnetic means (electric current, microwave excitation), rather than a chemical reaction, to produce a plasma.

Plasma-enhanced CVD. Image courtesy: https://www.researchgate.net/publication/270671650

Physical Deposition

Physical deposition uses mechanical or thermodynamic means to produce a thin film of solid; an everyday example is the formation of frost. Since most engineering materials are held together by relatively high energies, and chemical reactions are not used to store these energies, commercial physical deposition systems tend to require a low-pressure vapour environment to function properly; most can be classified as physical vapour deposition (PVD).

The material to be deposited is placed in an energetic, entropic environment so that particles of material escape its surface. Facing this source is a cooler surface which draws energy from these particles as they arrive, allowing them to form a solid layer. The whole system is kept in a vacuum deposition chamber to allow the particles to travel as freely as possible. Since particles tend to follow a straight path, films deposited by physical means are commonly directional, rather than conformal.

Types of Physical Deposition

A thermal evaporator uses an electric resistance heater to melt the material and raise its vapour pressure to a useful range. This is done in a high vacuum, both to allow the vapour to reach the substrate without reacting with or scattering against other gas-phase atoms in the chamber, and to reduce the incorporation of impurities from the residual gas in the vacuum chamber.

Only materials with a much higher vapour pressure than the heating element can be deposited without contamination of the film. Molecular beam epitaxy is a particularly sophisticated form of thermal evaporation.

An electron beam evaporator fires a high-energy beam from an electron gun to boil a small spot of material; since the heating is not uniform, lower-vapour-pressure materials can be deposited. The beam is usually bent through an angle of 270° to ensure that the gun filament is not directly exposed to the evaporant flux. Typical deposition rates for electron beam evaporation range from 1 to 10 nanometres per second.
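
A quick sanity check on those figures: growth time is just target thickness divided by rate. A trivial illustrative sketch using the quoted 1-10 nm/s range:

```python
def deposition_time_s(thickness_nm, rate_nm_per_s):
    """Time needed to grow a film of the given thickness at a constant rate."""
    return thickness_nm / rate_nm_per_s

# A 200 nm film at the low and high ends of the quoted range:
for rate in (1.0, 10.0):
    print(f"{rate:4.1f} nm/s -> {deposition_time_s(200.0, rate):5.1f} s")
# 1.0 nm/s -> 200.0 s; 10.0 nm/s -> 20.0 s
```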

Electron Beam Evaporator. Image courtesy: https://www.researchgate.net/profile/Nilofar-Asim

Sputtering relies on a plasma (usually a noble gas, such as argon) to knock material from a "target" a few atoms at a time; it is a process used to deposit thin films of a material onto a substrate. By igniting a gaseous plasma and then accelerating the ions from this plasma into a source material (the target), the source material is eroded by the arriving ions via energy transfer and is ejected in the form of neutral particles, either single atoms or clusters of molecules.

As these neutral particles are ejected, they travel in a straight line unless they come into contact with other particles or a nearby surface. The target is kept at a comparatively low temperature; since the process is not one of evaporation, this makes sputtering one of the most flexible deposition techniques. It is especially useful for compounds, where the different constituents would otherwise tend to evaporate at different rates.

Sputtering

Pulsed laser deposition works by an ablation process: pulses of focused laser light vaporise the surface of the target material and convert it to plasma; this plasma usually reverts to a gas before it reaches the substrate.

Pulsed Laser Deposition. Image courtesy: https://groups.ist.utl.pt/

Cathodic arc deposition (arc-PVD) is a kind of ion-beam deposition in which an electrical arc is created that blasts ions from the cathode. The arc has an extremely high power density, resulting in a high level of ionisation (30-100%), multiply charged ions, neutral particles, clusters, and droplets. If a reactive gas is introduced during the evaporation process, dissociation, ionisation, and excitation can occur during interaction with the ion flux, and a compound film will be deposited.

Cathodic Arc Deposition. Image courtesy: https://www.advancedenergyblog.com/

Now that you have read about Thin Film Deposition, check out our course on Ampersand Academy and read this interesting article about the Sol Gel Technique.

Sol Gel Technique

Easy read on the Sol Gel Technique in 2021

Know in detail about the sol-gel technique and its advantages

The sol-gel method is a type of chemical solution deposition technique, a wet chemical route for the synthesis of colloidal dispersions of oxides that can be converted to powders, fibres, thin films, and monoliths. Sol-gel coating is a process for preparing single- or multi-component oxide coatings, which may be glassy, ceramic, or crystalline ceramic, depending on the process. The nanomaterials used in modern ceramic and device technology require high purity, and the method facilitates control over composition and structure.

The sol-gel method has recently been applied in materials science and ceramic engineering, primarily for the fabrication of materials (typically metal oxides) starting from a chemical solution that acts as the precursor for an integrated network (or gel) of either discrete particles or network polymers. Typical precursors are metal alkoxides and metal chlorides, which undergo various forms of hydrolysis and polycondensation reactions.

The formation of a metal oxide involves connecting the metal centres with oxo (M-O-M) or hydroxo (M-OH-M) bridges, thereby generating metal-oxo or metal-hydroxo polymers in solution. The sol thus evolves towards the formation of a gel-like diphasic system containing both a liquid phase and a solid phase, whose morphologies range from discrete particles to continuous polymer networks.

Sol Gel Technique. Image courtesy: Wikipedia

The precursor sol can be either deposited on a substrate to form a film, cast into a suitable container with the desired shape, or used to synthesise powders. The sol-gel approach is a cheap, low-temperature technique that allows excellent control of the product's chemical composition. Even small quantities of dopants, such as organic dyes and rare earth elements, can be introduced into the sol and end up uniformly dispersed in the final product.

It is used in ceramic processing and manufacturing, as an investment casting material, and as a means of producing very thin films of metal oxides for various purposes. Sol-gel derived materials have diverse applications in optics, electronics, energy, space, biosensors, medicine, reactive materials, and separation technology.

Advantages of Sol-gel Technique

Sol-gel coating is an attractive method because it has many advantages, for example:

  • The chemical reactants for the sol-gel process can be conveniently purified by distillation and crystallisation.
  • All starting materials are mixed at the molecular level in the solution, so a high degree of homogeneity of the films can be expected.
  • Trace elements in the form of organometallic compounds or soluble organic or inorganic salts can be added to adjust the microstructure or to improve the structural, optical, and electrical properties of the oxide films.
  • The viscosity, surface tension, and concentration of the solution can be easily adjusted.
  • Large-area films of the desired composition and thickness can be formed on substrates of complex geometry.
  • It facilitates the formation of films of complex oxides and makes it easy to control the composition and microstructure of the deposited films.
  • Sol-gel coating is particularly well suited to the fabrication of transparent layers with a high degree of planarity and surface quality.

Now that you have read about the Sol Gel Technique, check out our course on Ampersand Academy and read this interesting article about Thin Film Deposition.