Achieving better performance from brains and computers

The new generation of integrated circuits is capable of processing calculations at blazing speed, but this performance can come at a cost.

Each push of electrons to move 1s and 0s leaves a thermal imprint. If the paths of these moving electrons concentrate in one part of the chip, the resulting hot spot can cause a variety of undesirable effects, ranging from slowed performance to reduced chip life.

Finding ways to detect these hot spots and cool them down – by adjusting the chip design or the algorithms running on it – could improve chip performance.

We humans have a similar problem. Learning a new skill like playing the cello or computer programming can be fun, but it’s also inevitable that we stumble along the way, as information may arrive too quickly or we may not have enough time to assimilate what we’ve learned.

Cognitive overload can appear as hot spots in the brain, marked by changes in blood flow and oxygenation. If there were a way to detect this overload in real time and adjust the task to be more manageable, we could work more efficiently and learn faster.

In recently published research, Tufts School of Engineering faculty highlight two very different but analogous methods for detecting hot spots – in electronic circuits and in the brain – to lighten the computational load and improve the efficiency of both silicon computation and human cognition.

Wetware Optimization

In the Tufts Human-Computer Interaction lab, a student is tasked with learning a Bach piano chorale for the first time. She puts on a fabric headband that presses glowing optical fibers against her forehead, and although she’s a beginner, she’s steadily guided through each step of the lesson at a pace that follows her natural ability to learn and retain information.

She finishes the lesson in a shorter time than expected. But that comes as no surprise to the researchers who designed the headband she wears. It read her brain activity to automatically adapt the lesson to her abilities.

In another experiment, an air traffic controller is given a simulation of a task he has performed countless times: managing multiple inbound and outbound flights. The task, however, still causes stress and potential mistakes when the pace and complexity of the required decisions ramp up too quickly.

But this time, as he manages the traffic, some of it can be automatically offloaded to a colleague who has the capacity to take on more; or, if his own workload is low, he may be assigned additional traffic, keeping him between boredom and overload. Again, the headband reads his cognitive load and adjusts the workflow.

The brain scanning device was created by Robert Jacob and Sergio Fantini, professors of computer science and biomedical engineering, respectively, at Tufts. It is based on a technology called functional near-infrared spectroscopy (fNIRS). Near-infrared light – just below visible red light in frequency – passes through optical fibers in the headband and penetrates harmlessly through the skull to reach the cerebral cortex.

This is where much of the cognitive processing – or thinking – takes place. Measuring the near-infrared glow that radiates back provides information about the amount of blood flow, and therefore brain activity, in the tissue below.
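The arithmetic that turns that glow into numbers is typically the modified Beer-Lambert law, which relates the dimming of light at two near-infrared wavelengths to changes in oxy- and deoxyhemoglobin concentration. Below is a minimal Python sketch of that calculation; the intensity readings, extinction coefficients, source-detector distance, and pathlength factor are all illustrative values, not parameters of the Tufts instrument.

```python
# Minimal sketch of the modified Beer-Lambert law behind fNIRS readings.
# All numbers below are illustrative, not calibrated instrument constants.
import numpy as np

# Detected light intensity at two wavelengths, at baseline and during a task
I_baseline = np.array([1.00, 1.00])   # arbitrary units at ~760 nm and ~850 nm
I_task     = np.array([0.93, 0.90])

# Change in optical density at each wavelength
delta_OD = -np.log10(I_task / I_baseline)

# Approximate extinction coefficients [1/(cm*M)] for oxy- (HbO) and
# deoxyhemoglobin (HbR); rows are wavelengths, columns are [HbO, HbR]
E = np.array([[1486.0, 3843.0],    # ~760 nm
              [2526.0, 1798.0]])   # ~850 nm

separation = 3.0   # source-detector distance in cm (assumed)
dpf = 6.0          # differential pathlength factor (assumed)

# Solve delta_OD = E @ [dHbO, dHbR] * separation * dpf for the two unknowns
d_hbo, d_hbr = np.linalg.solve(E * separation * dpf, delta_OD)
print(f"change in HbO: {d_hbo:.2e} M, change in HbR: {d_hbr:.2e} M")
```

A rise in oxyhemoglobin paired with a dip in deoxyhemoglobin over the prefrontal cortex is the classic signature of increased cognitive activity.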

The method can potentially provide a map of blood flow activity in the brain with a resolution of around one centimeter. Of course, other imaging methods such as functional MRI offer much higher resolution – around half a millimeter – but infrared light does not require extensive equipment or immobilization of the person, and the readings can be taken in real time and processed on a laptop or phone.

This trade-off means fNIRS enables applications that other methods cannot – it can provide information about brain activity while the subject is at home or in the office.

“We’re developing what’s called an implicit user interface,” says Jacob. “We want to get information from you, from your body’s response, without you pressing a button or turning a knob, to help a computer or device respond so you can interact more effectively with it.”

Another potential application would be as a diagnostic tool for attention deficit disorder (ADD), Fantini says. People with this condition tend to self-regulate their cognitive overload by switching from a task that requires intense concentration to other tasks or distractions that are less demanding.

Examining patterns of cognitive load – a high frequency of shifts from higher to lower cognitive effort, or a low tolerance for high cognitive load when faced with difficult tasks – could indicate ADD.
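As a purely hypothetical illustration of what such a screening signal might look like, the sketch below counts how often an fNIRS-derived workload estimate falls from a high level to a low one during a demanding task. The thresholds, data, and the function itself are invented for illustration; they are not taken from Fantini’s diagnostic work.

```python
# Hypothetical screening signal: count drops from high to low cognitive
# load in a workload time series. Thresholds and data are illustrative.
def count_disengagements(workload, high=0.7, low=0.3):
    """Count transitions from above `high` to below `low`."""
    count, engaged = 0, False
    for w in workload:
        if w >= high:
            engaged = True
        elif w <= low and engaged:
            count += 1            # subject dropped out of the hard task
            engaged = False
    return count

# Simulated workload estimates (0 = idle, 1 = overloaded) during a hard task
series = [0.8, 0.75, 0.2, 0.1, 0.85, 0.25, 0.9, 0.8, 0.15]
print(count_disengagements(series))  # prints 3: three rapid disengagements
```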

Improving the accuracy of the measurements will be key to making it a reliable tool for other researchers or even a consumer market. Getting consistent measurements used to require an hour of collecting readings from tasks with known high and low workloads to calibrate the instrument before measurement of a new task could begin.

In their most recent publications (PDF here and here), the researchers collaborated with Michael Hughes, assistant professor of computer science, to develop machine learning methods that can pre-train the brain workload detector on dozens of past subjects. This makes calibration for a new subject much faster. They released the pre-training dataset publicly, so it will be easier for others to set up their own workload detectors.
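Their exact models aren’t reproduced here, but the pre-train-then-calibrate idea can be sketched with off-the-shelf tools. In the sketch below everything is synthetic – the features, the subject pool, and the sample-weighting trick are placeholders standing in for whatever the released dataset and the papers’ methods actually provide.

```python
# Sketch: pre-train a high/low workload classifier on pooled data from past
# subjects, then adapt to a new subject with only a few labeled trials.
# All data here is synthetic; features stand in for fNIRS measurements.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def synth_subject(n=200, shift=0.0):
    """Synthetic (features, labels); label 1 means high workload."""
    y = rng.integers(0, 2, n)
    X = rng.normal(loc=y[:, None] * 1.5 + shift, scale=1.0, size=(n, 8))
    return X, y

# Pool thirty past subjects and pre-train once
past = [synth_subject(shift=rng.normal(0, 0.3)) for _ in range(30)]
X_pool = np.vstack([X for X, _ in past])
y_pool = np.concatenate([y for _, y in past])
pretrained = LogisticRegression(max_iter=1000).fit(X_pool, y_pool)

# New subject: 20 quick calibration trials instead of an hour of them
X_cal, y_cal = synth_subject(n=20, shift=0.6)
X_test, y_test = synth_subject(n=400, shift=0.6)
print("pre-trained only: ", pretrained.score(X_test, y_test))

# Refit on pooled data plus the calibration trials, weighted heavily
weights = np.concatenate([np.ones(len(y_pool)), 20.0 * np.ones(len(y_cal))])
adapted = LogisticRegression(max_iter=1000).fit(
    np.vstack([X_pool, X_cal]), np.concatenate([y_pool, y_cal]),
    sample_weight=weights)
print("after calibration:", adapted.score(X_test, y_test))
```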

“We advocate handheld, portable, cordless instruments,” says Fantini. “The first wireless, smartphone-linked prototypes have been developed. Ideas for building these devices into goggles or hats have also been explored, but that’s still something for the future.”

Hardware Optimization

Hot spots on computer microprocessors also indicate an overload of activity that can lead to inefficiencies and even errors. In a recent study, Mark Hempstead, associate professor of electrical and computer engineering, describes a new computer model called HotGauge, which simulates the computing activity of a CPU chip and the heat that workloads generate across the chip.

The simulation can help find ways to redesign chips, or the programs running on them, to minimize overheating.

Since the mid-1970s, improvements in microchip capability have been driven by increasing transistor density, which allowed higher operating rates while chips maintained the same density of energy use – and therefore heat generated – across their entire surface. This was known as Dennard scaling.

But that era is over. More recently, power density has increased exponentially with each generation of chip design. To make matters worse, the trend is to cram more functionality into ever smaller chips that fit into everything from supercomputers to laptops to phones.
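The arithmetic behind that shift is simple enough to check by hand. Under classic Dennard scaling, shrinking feature size by a factor s also cuts capacitance and voltage by s while raising frequency by s, and the terms cancel; once voltage can no longer scale down, power density grows roughly as s squared per generation. A back-of-the-envelope sketch, with idealized scaling factors:

```python
# Back-of-the-envelope Dennard scaling arithmetic (idealized factors).
# Dynamic power per transistor scales as C * V^2 * f; transistor density
# grows as s^2 when linear dimensions shrink by a factor s.
def power_density_ratio(s, voltage_scales=True):
    """Relative power density after one shrink by factor s (e.g. s = 1.4)."""
    C = 1 / s                                 # capacitance shrinks with size
    V = (1 / s) if voltage_scales else 1.0    # classic Dennard: V scales too
    f = s                                     # clock frequency rises with s
    per_transistor = C * V**2 * f
    density = s**2                            # s^2 more transistors per area
    return per_transistor * density

print(power_density_ratio(1.4))                        # 1.0: constant (Dennard era)
print(power_density_ratio(1.4, voltage_scales=False))  # ~1.96: post-Dennard growth
```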

The hot spot problem has become a significant challenge in the design of each new generation of central processing units (CPUs). Hot spots not only get hotter, they also appear and disappear in less than 200 microseconds – brief, yet long enough to compromise efficiency and degrade chip life expectancy.

For a long time, temperature regulation has been handled with physical techniques – heat sinks, fans, and, in large-scale or compute-intensive applications, liquid cooling. There is also power gating, which shuts down parts of the chip that are not in use, and dynamic voltage and frequency scaling, which lowers voltage and clock speed to save energy. These approaches are no longer sufficient.
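To see why throttling alone is a blunt instrument, consider a toy control loop in the spirit of dynamic voltage and frequency scaling: the controller keeps a simulated core under its temperature limit, but only by repeatedly backing off the clock. All constants are invented for illustration, not drawn from any real CPU.

```python
# Toy DVFS-style throttling loop; every constant here is illustrative.
AMBIENT, LIMIT = 45.0, 90.0            # degrees C
freqs = [1.2, 1.8, 2.4, 3.0, 3.6]      # available clock steps in GHz
level, temp = len(freqs) - 1, AMBIENT  # start at full speed

for tick in range(60):
    f = freqs[level]
    power = 10.0 * (f / freqs[-1]) ** 3            # crude P ~ V^2*f, with V ~ f
    temp += 0.8 * power - 0.1 * (temp - AMBIENT)   # heating in, cooling out
    if temp > LIMIT and level > 0:
        level -= 1                                 # throttle down
    elif temp < LIMIT - 10 and level < len(freqs) - 1:
        level += 1                                 # creep back up

print(f"settled around {freqs[level]:.1f} GHz at {temp:.1f} C")
```

The core stays safe, but the clock oscillates below its peak: performance is traded away to manage heat after the fact, rather than designed around from the start.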

What’s needed today, Hempstead says, is a more active role in designing chip and application architecture to minimize the frequency and severity of hot spots. To do that, you must first quantify the hot spot effects of different architecture and application designs. This is where HotGauge comes in.

HotGauge, created by Hempstead, allows designers to build a simulated virtual chip running a simulated application and see the likely heat signatures that result.
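This is not HotGauge’s actual methodology, but the core idea – lay per-block power onto a floorplan and solve for the resulting temperature field – can be illustrated with a toy finite-difference relaxation. The floorplan, power values, and physical constants below are invented.

```python
# Toy thermal map of a chip floorplan (not HotGauge itself): relax a
# finite-difference heat equation over a grid with per-block power input.
import numpy as np

N = 64
power = np.zeros((N, N))
power[10:20, 10:20] = 4.0     # a busy execution unit (invented layout)
power[40:52, 30:44] = 2.0     # a moderately active cache block

temp = np.zeros((N, N))       # temperature above ambient
alpha, dissipation = 0.2, 0.05
for _ in range(2000):         # iterate toward steady state
    lap = (np.roll(temp, 1, 0) + np.roll(temp, -1, 0) +
           np.roll(temp, 1, 1) + np.roll(temp, -1, 1) - 4 * temp)
    temp += alpha * lap + 0.01 * power - dissipation * temp
    # (np.roll gives periodic boundaries; fine for a toy model)

r, c = np.unravel_index(temp.argmax(), temp.shape)
print(f"hottest cell ({r}, {c}): {temp.max():.2f} degrees above ambient")
```

A designer could then move blocks apart or reschedule work to flatten the peaks – the kind of what-if exploration a tool like HotGauge is meant to make fast and accurate.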

In the recent study, Hempstead demonstrated that HotGauge can identify trouble spots and allow the chip or application design to be tuned so the chip runs cooler, or at least avoids localized temperature peaks – a proof of principle that it can make a valuable contribution to the design of future efficient processors.

Mike Silver can be reached at [email protected].