
New computer vision method accelerates inspection of electronic materials

To improve the performance of solar cells, transistors, LEDs and batteries, better electronic materials are needed – materials made from novel compositions that have yet to be discovered.

To speed up the search for advanced functional materials, scientists are using AI tools to identify promising candidates from hundreds of millions of chemical formulations. In parallel, engineers are building machines that can print hundreds of material samples at a time, based on the chemical compositions flagged by AI search algorithms.

But until now, there has been no similarly rapid way to confirm that these printed materials actually perform as expected. This final step of material characterization has been a major bottleneck in testing advanced materials.

A new computer vision technique developed by MIT engineers now significantly speeds up the characterization of newly synthesized electronic materials. The technique automatically analyzes images of printed semiconductor samples and quickly estimates two important electronic properties for each sample: band gap (a measure of the activation energy of electrons) and stability (a measure of lifetime).

The new technique characterizes electronic materials 85 times faster than the conventional benchmark method.

The researchers hope to use the method to speed up the search for promising solar cell materials. They also plan to integrate the method into a fully automated material screening system.

“Ultimately, we can imagine integrating this technique into an autonomous laboratory of the future,” says MIT graduate student Eunice Aissi. “The whole system would allow us to give a computer a materials problem, have it predict possible compounds, and then produce and characterize those predicted materials around the clock until it finds the desired solution.”

“The applications of these techniques range from improving solar energy to transparent electronics and transistors,” adds MIT graduate student Alexander (Aleks) Siemenn. “It really covers the full spectrum of how semiconductor materials can benefit society.”

Aissi and Siemenn describe the new technique in detail in a study published today. Their co-authors at MIT include graduate student Fang Sheng, postdoc Basita Das, and professor of mechanical engineering Tonio Buonassisi, as well as former visiting professor Hamide Kavak of Cukurova University and visiting postdoc Armi Tiihonen of Aalto University.

Performance in optics

Once a new electronic material is synthesized, characterization of its properties is typically performed by a domain expert who examines one sample at a time using a benchtop tool called UV-Vis, which scans through different colors of light to determine where the semiconductor begins to absorb more strongly. This manual process is precise but also time-consuming: a domain expert typically characterizes about 20 material samples per hour – a snail's pace compared to some printing tools that can produce 10,000 different material combinations per hour.
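For a rough sense of what that benchmark measurement involves, here is a minimal sketch – assuming a single measured absorbance spectrum and a simple steepest-edge rule, neither of which is taken from the study – that locates the absorption edge and converts it to a band gap energy:

import numpy as np

# Hypothetical example: estimate a band gap from a single UV-Vis absorbance spectrum.
# The wavelengths and absorbance values below are synthetic placeholders, not real data.
wavelengths_nm = np.linspace(300, 900, 601)
absorbance = 1.0 / (1.0 + np.exp((wavelengths_nm - 780) / 10.0))  # step-like absorption edge

# Take the absorption edge as the wavelength of steepest change in absorbance.
slope = np.gradient(absorbance, wavelengths_nm)
edge_nm = wavelengths_nm[np.argmin(slope)]  # most negative slope = sharpest drop-off

# Convert the edge wavelength to an energy: E [eV] ~ 1239.84 / lambda [nm].
band_gap_ev = 1239.84 / edge_nm
print(f"Estimated absorption edge: {edge_nm:.0f} nm ~ {band_gap_ev:.2f} eV")

An expert repeats a measurement like this one sample at a time, which is why the manual workflow tops out at roughly 20 samples per hour.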

“The manual characterization process is very slow,” says Buonassisi. “You can trust the measurements, but they aren’t matched to the speed at which you can apply material to a substrate today.”

To speed up the characterization process and eliminate one of the biggest bottlenecks in materials testing, Buonassisi and his colleagues turned to computer vision – a field that uses computer algorithms to quickly and automatically analyze optical features in an image.

“Optical characterization techniques are very powerful,” notes Buonassisi. “You can get information very quickly. The images are very large, spanning many pixels and wavelengths, and a human simply cannot process them, but a machine learning program on a computer can.”

The team realized that certain electronic properties – namely band gap and stability – could be estimated from visual information alone, provided that information was captured in sufficient detail and interpreted appropriately.

With this goal in mind, the researchers developed two new computer vision algorithms to automatically interpret images of electronic materials: one to estimate the band gap and another to determine stability.

The first algorithm processes visual data from highly detailed hyperspectral images.

“Instead of a standard camera image with three channels – red, green, and blue (RGB) – the hyperspectral image has 300 channels,” explains Siemenn. “The algorithm takes this data, transforms it, and calculates a band gap. We perform this process extremely quickly.”
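The study's exact processing pipeline is not spelled out here, but a minimal sketch of the idea – vectorizing an absorption-edge calculation over every pixel of an assumed 300-channel hyperspectral cube – might look like the following; the array shapes, synthetic data, and the simple edge rule are illustrative assumptions, not the team's actual code:

import numpy as np

# Minimal sketch with assumed shapes: a hyperspectral cube of H x W pixels,
# each carrying ~300 spectral channels, plus the wavelength of each channel.
H, W, C = 64, 64, 300
wavelengths_nm = np.linspace(400, 1000, C)
cube = np.random.rand(H, W, C)  # placeholder for measured reflectance data

def band_gap_map(cube, wavelengths_nm):
    """Estimate a per-pixel band gap (eV) from the absorption edge of each spectrum."""
    absorbance = -np.log10(np.clip(cube, 1e-6, None))        # crude reflectance-to-absorbance
    slope = np.gradient(absorbance, wavelengths_nm, axis=2)  # spectral derivative per pixel
    edge_idx = np.argmax(np.abs(slope), axis=2)              # channel of steepest change
    return 1239.84 / wavelengths_nm[edge_idx]                # E [eV] ~ 1239.84 / lambda [nm]

gaps = band_gap_map(cube, wavelengths_nm)
print("Median band gap across the image:", round(float(np.median(gaps)), 2), "eV")

Because the calculation runs over the whole image at once, every printed sample in the frame can be evaluated in a single pass rather than one at a time.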

The second algorithm analyzes standard RGB images and assesses the stability of a material based on changes in the material's color over time.

“We found that color changes can be a good indicator of the degradation rate in the material system we studied,” says Aissi.

Material compositions

The team applied the two new algorithms to characterize the band gap and stability of about 70 printed semiconductor samples. They used a robotic printer to deposit the samples on a single slide, like cookies on a baking sheet. Each sample was printed with a slightly different combination of semiconductor materials. In this case, the team printed different ratios of perovskites – a type of material considered a promising candidate for solar cells but also known to degrade quickly.

“You try to vary the composition – add a little bit of this and a little bit of that – to make the perovskites more stable and higher-performing,” says Buonassisi.

After printing 70 different compositions of perovskite samples on a single slide, the team scanned the slide with a hyperspectral camera. They then applied an algorithm that visually “segments” the image, automatically isolating the samples from the background. They ran the new band gap algorithm on the isolated samples and automatically calculated the band gap for each sample. The entire band gap extraction process took about six minutes.
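The segmentation step could be sketched, under assumptions about the data, with off-the-shelf thresholding and connected-component labeling from scikit-image; the threshold rule and minimum region size below are illustrative choices, not the team's actual method:

import numpy as np
from skimage import filters, measure

# Minimal sketch with assumed data: a grayscale view of the slide scan, where the
# printed droplets are brighter than the background substrate.
scan = np.random.rand(512, 512)  # placeholder for the real slide image

# Threshold the image and label connected regions, ideally one per printed sample.
threshold = filters.threshold_otsu(scan)
mask = scan > threshold
labels = measure.label(mask)

# Keep each sufficiently large region as a candidate sample, ready for per-sample analysis.
regions = [r for r in measure.regionprops(labels) if r.area > 50]
print(f"Isolated {len(regions)} candidate sample regions")

Each isolated region can then be handed to the band gap algorithm, so all 70 compositions on the slide are processed without any manual cropping.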

“Normally, a domain expert would need several days to manually characterize the same number of samples,” says Siemenn.

To test stability, the team placed the same slide in a chamber where they varied environmental conditions such as humidity, temperature, and light exposure. They used a standard RGB camera to capture an image of the samples every 30 seconds over two hours. They then applied the second algorithm to each sample's sequence of images to estimate how much each droplet changed color or degraded under the different environmental conditions. In the end, the algorithm produced a “stability index,” a measure of the durability of each sample.
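The study defines its own stability metric; the sketch below only illustrates the general idea under stated assumptions – tracking how far each droplet's average color drifts from its starting color across the roughly 240 time-lapse frames – with the array shapes and the drift formula chosen purely for illustration:

import numpy as np

# Minimal sketch with assumed shapes: one cropped RGB time-lapse stack per sample,
# with frames captured every 30 s over two hours (about 240 frames).
T, H, W = 240, 32, 32
frames = np.random.rand(T, H, W, 3)  # placeholder for the real time series

def color_drift_score(frames):
    """Smaller values = less color change over the run, i.e. a more stable sample."""
    mean_color = frames.reshape(len(frames), -1, 3).mean(axis=1)  # average RGB per frame
    drift = np.linalg.norm(mean_color - mean_color[0], axis=1)    # distance from initial color
    return float(drift.mean())                                    # average drift over the run

print("Color-drift score (arbitrary units):", round(color_drift_score(frames), 3))

A per-sample score like this can be computed for every droplet on the slide under each environmental condition, which is how a single camera and one algorithm can rank the durability of many compositions at once.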

As a control, the team compared their results with manual measurements of the same droplets performed by a domain expert. Compared to the expert's benchmark estimates, the team's band gap and stability results were 98.5 percent and 96.9 percent as accurate, respectively, and 85 times faster.

“We were continually surprised by how these algorithms could not only speed up characterization but also produce accurate results,” says Siemenn. “We can imagine this fitting into the automated materials pipeline we are developing in the lab, so we can run it fully autonomously – using machine learning to decide where we want to discover these new materials, then printing them and actually characterizing them, all with very fast processing.”

This work was supported in part by First Solar.
