Tag Archives: BrainChip

Neuromorphic (brainlike) computing and your car (a Mercedes Benz Vision AVTR concept car)

If you’ve ever fantasized about a Batmobile of your own, the dream could come true soon,

Mercedes Benz VISION AVTR [downloaded from https://www.mercedes-benz.com/en/innovation/concept-cars/vision-avtr/]

It was the mention of neuromorphic computing in a television ad sometime in September 2022 that sent me on a mission to find out what Mercedes Benz means when they use neuromorphic computing to describe a feature found in their Vision AVTR concept car. First, a little bit about the car (from the Vision AVTR webpage accessed in October 2022),

VISION AVTR – inspired by AVATAR.

The name of the groundbreaking concept vehicle stands not only for the close collaboration in developing the show car together with the AVATAR team but also for ADVANCED VEHICLE TRANSFORMATION. This concept vehicle embodies the vision of Mercedes-Benz designers, engineers and trend researchers for mobility in the distant future.

…

Organic battery technology.

The VISION AVTR was designed in line with its innovative electric drive. This is based on a particularly powerful and compact high-voltage battery. For the first time, the revolutionary battery technology is based on graphene-based [emphasis mine] organic cell chemistry and thus completely eliminates rare, toxic and expensive earths such as metals. Electromobility thus becomes independent of fossil resources. An absolute revolution is also the recyclability by composting, which is 100% recyclable due to the materiality. As a result, Mercedes-Benz underlines the high relevance of a future circular economy in the raw materials sector.

Masterpiece of efficiency.

At Mercedes-Benz, the consideration of efficiency goes far beyond the drive concept, because with increasing digitalisation, the performance of the large number of so-called secondary consumers also comes into focus – along with their efficient energy supply, without negatively affecting the drive power of the vehicle itself. Energy consumption per computing operation is already a key target in the development of new computer chips. This trend will continue in the coming years with the growth of sensors and artificial intelligence in the automotive industry. The neuro-inspired approach of the VISION AVTR, including so-called neuromorphic hardware, promises to minimise the energy requirements of sensors, chips and other components to a few watts. [emphasis mine] Their energy supply is provided by the cached current of the integrated solar plates on the back of the VISION AVTR. The 33 multi-directionally movable surface elements act as “bionic flaps”.

Interior and exterior merge.

For the first time, Mercedes-Benz has worked with a completely new design approach in the design of the VISION AVTR. The holistic concept combines the design disciplines interior, exterior and UX [user experience] from the first sketch. Man and human perception are the starting point of a design process from the inside out. The design process begins with the experience of the passengers and consciously focuses on the perception and needs of the passengers. The goal was to create a car that prolongs the perception of its passengers. It was also a matter of creating an immersive experience space in which passengers connect with each other, with the vehicle and the surrounding area [emphasis mine] in a unique way.

Intuitive control.

The VISION AVTR already responds to the approach of the passengers by visualising the energy and information flow of the environment with digital neurons that flow through the grille through the wheels to the rear area. The first interaction in the interior between man and vehicle happens completely intuitively via the control unit: by placing the hand on the centre console, the interior comes to life and the vehicle recognises the driver by his breathing. This is made visible on the instrument panel and on the user’s hand. The VISION AVTR thus establishes a biometric connection with the driver [emphasis mine] and increases his awareness of the environment. The digital neurons flow from the interior into the exterior and visualise the flow of energy and information. For example, when driving, the neurons flow over the outside of the vehicle. [emphasis mine] When changing direction, the energy flows to the corresponding side of the vehicle.

The vehicle as an immersive experience space.

The visual connection between passengers and the outside world is created by the curved display module, which replaces a conventional dashboard. The outside world around the vehicle and the surrounding area is shown in real-time 3D graphics and at the same time shows what is happening on the road in front of the vehicle. Combined with energy lines, these detailed real-time images bring the interior to life and allow passengers to discover and interact with the environment in a natural way with different views of the outside world. Three wonders of nature – the Huangshan Mountains of China, the 115-metre-high Hyperion Tree found in the United States and the pink salt Lake Hillier from Australia – can be explored in detail. Passengers become aware of various forces of nature that are not normally visible to the human eye, such as magnetic fields, bioenergy or ultraviolet light.

The curved display module in the Mercedes-Benz VISION AVTR – inspired by AVATAR
[downloaded from https://www.mercedes-benz.com/en/innovation/concept-cars/vision-avtr/]

Bionic formal language.

When the boundaries between vehicle and living beings are lifted, Mercedes-Benz combines luxury and sustainability and works to make the vehicles as resource-saving as possible. With the VISION AVTR, the brand is now showing how a vehicle can blend harmoniously into its environment and communicate with it. In the ecosystem of the future, the ultimate luxury is the fusion of human and nature with the help of technology. The VISION AVTR is thus an example of sustainable luxury in the field of design. As soon as you get in, the car becomes an extension of your own body and a tool to discover the environment much as in the film humans can use avatars to extend and expand their abilities.

A few thoughts

The movie Avatar was released in 2009 and was recently re-released in movie houses in anticipation of the sequel, Avatar: The Way of Water, to be released in December 2022 (Avatar [2009 film] Wikipedia entry). The timing, Avatar and AVTR, is interesting, oui?

Moving on to ‘organic’, which in this instance means carbon-based and, specifically, graphene. Commercialization of graphene is likely top-of-mind for the folks at the European Commission who bet 1B Euros of European Union money in 2013 to fund the Graphene Flagship project. This battery from German company Mercedes Benz must be exciting news for the funders and for people who want to lessen dependency on rare earths. Your battery can be composted safely (according to the advertising).

The other piece of good news is the neuromorphic computing,

“The neuro-inspired approach of the VISION AVTR, including so-called neuromorphic hardware, promises to minimise the energy requirements of sensors, chips and other components to a few watts.”

On the other hand, and keeping in mind the image above (a hand with what looks like an embedded object), it seems a little disconcerting to merge with one’s car, “… passengers connect with each other, with the vehicle and the surrounding area …”, which becomes even more disconcerting when this appears in the advertising,

… VISION AVTR thus establishes a biometric connection with the driver … The digital neurons flow from the interior into the exterior and visualise the flow of energy and information. For example, when driving, the neurons flow over the outside of the vehicle.

Are these ‘digital neurons’ flowing around the car like a water current? Also, the car is visualizing? Hmm …

I did manage to find a bit more information about neuromorphic computing although it’s for a different Mercedes Benz concept car (there’s no mention of flowing digital neurons) in a January 18, 2022 article by Sally Ward-Foxton for EE Times (Note: A link has been removed),

The Mercedes Vision EQXX concept car, promoted as “the most efficient Mercedes-Benz ever built,” incorporates neuromorphic computing to help reduce power consumption and extend vehicle range. To that end, BrainChip’s Akida neuromorphic chip enables in-cabin keyword spotting as a more power-efficient way than existing AI-based keyword detection systems.

“Working with California-based artificial intelligence experts BrainChip, Mercedes-Benz engineers developed systems based on BrainChip’s Akida hardware and software,” Mercedes noted in a statement describing the Vision EQXX. “The example in the Vision EQXX is the “Hey Mercedes” hot-word detection. Structured along neuromorphic principles, it is five to ten times more efficient than conventional voice control,” the carmaker claimed.

That represents validation of BrainChip’s technology by one of its early-access customers. BrainChip’s Akida chip accelerates spiking neural networks (SNNs) and convolutional neural networks (via conversion to SNNs). It is not limited to a particular application, and also run [sic] person detection, voice or face recognition SNNs, for example, that Mercedes could also explore.

This January 6, 2022 article by Nitin Dahad for embedded.com describes what were then the latest software innovations in the automotive industry and segues into a description of spiking neural networks (Note: A link has been removed),

The electric vehicle (EV) has clearly become a key topic of discussion, with EV range probably the thing most consumers are probably worried about. To address the range concern, two stories emerged this week – one was Mercedes-Benz’ achieving a 1,000 km range with its VISION EQXX prototype, albeit as a concept car, and General Motors announcing during a CES [Consumer Electronics Show] 2022 keynote its new Chevrolet Silverado EV with 400-mile (640 km) range.

In briefings with companies, I often hear them talk about the software-defined car and the extensive use of software simulation (or we could also call it a digital twin). In the case of both the VISION EQXX and the Silverado EV, software plays a key part. I also spoke to BlackBerry about its IVY platform and how it is laying the groundwork for software-defined vehicles.

Neuromorphic computing for infotainment

This efficiency is not just being applied to enhancing range though. Mercedes-Benz also points out that its infotainment system uses neuromorphic computing to enable the car to “take its cue from the way nature thinks”.

Mercedes-Benz VISION EQXX

The hardware runs spiking neural networks, in which data is coded in discrete spikes and energy only consumed when a spike occurs, reducing energy consumption by orders of magnitude. In order to deliver this, the carmaker worked with BrainChip, developing the systems based on its Akida processor. In the VISION EQXX, this technology enables the “Hey Mercedes” hot-word detection five to ten times more efficiently than conventional voice control. Mercedes-Benz said although neuromorphic computing is still in its infancy, systems like these will be available on the market in just a few years. When applied on scale throughout a vehicle, they have the potential to radically reduce the energy needed to run the latest AI technologies.
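That description of spiking neural networks is the most concrete technical detail in any of this material, so here’s a rough illustration of the idea. The Python sketch below is a generic leaky integrate-and-fire neuron, not BrainChip’s Akida design (which isn’t public); it’s only meant to show how a spiking model carries information in discrete events, so that nothing much happens, and little energy is notionally spent, when there is no spike,

```python
# A generic leaky integrate-and-fire (LIF) neuron, for illustration only.
# This is NOT BrainChip's Akida design; it simply shows how a spiking model
# carries information in discrete events rather than continuous activations.

class LIFNeuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0        # membrane potential
        self.threshold = threshold  # firing threshold
        self.leak = leak            # decay applied at each timestep

    def step(self, weighted_input):
        """Advance one timestep; return True only when the neuron fires."""
        self.potential = self.potential * self.leak + weighted_input
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after the spike
            return True             # the spike is the only event downstream circuits must handle
        return False

# Inputs arrive as 0/1 spike events; quiet timesteps generate no output events.
neuron = LIFNeuron()
input_spikes = [0, 1, 0, 0, 1, 1, 0, 0, 0, 1]
print([neuron.step(0.6 * s) for s in input_spikes])
```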

For anyone curious about BrainChip, you can find out more here.

It took a little longer than I hoped but I’m glad that I found out a little more about neuromorphic computing and one application in the automotive industry.

Is it time to invest in a ‘brain chip’ company?

This story takes a few twists and turns. First, ‘brain chips’, as they’re sometimes called, would allow, theoretically, computers to learn and function like human brains. (Note: There’s another type of ‘brain chip’ which could be implanted in human brains to help deal with diseases such as Parkinson’s and Alzheimer’s. *Today’s [June 26, 2015] earlier posting about an artificial neuron points at some of the work being done in this area.*)

Returning to the ‘brain chip’ at hand. Second, there’s a company called BrainChip, which has one patent and another pending for, yes, a ‘brain chip’.

The company, BrainChip, founded in Australia and now headquartered in California’s Silicon Valley, recently sparked some investor interest in Australia. From an April 7, 2015 article by Timna Jacks for the Australian Financial Review,

Former mining stock Aziana Limited has whet Australian investors’ appetite for science fiction, with its share price jumping 125 per cent since it announced it was acquiring a US-based tech company called BrainChip, which promises artificial intelligence through a microchip that replicates the neural system of the human brain.

Shares in the company closed at 9¢ before the Easter long weekend, having been priced at just 4¢ when the backdoor listing of BrainChip was announced to the market on March 18.

Creator of the patented digital chip, Peter Van Der Made told The Australian Financial Review the technology has the capacity to learn autonomously, due to its composition of 10,000 biomimic neurons, which, through a process known as synaptic time-dependent plasticity, can form memories and associations in the same way as a biological brain. He said it works 5000 times faster and uses a thousandth of the power of the fastest computers available today.

Mr Van Der Made is inviting technology partners to license the technology for their own chips and products, and is donating the technology to university laboratories in the US for research.

The Netherlands-born Australian, now based in southern California, was inspired to create the brain-like chip in 2004, after working at the IBM Internet Security Systems for two years, where he was chief scientist for behaviour analysis security systems. …

A June 23, 2015 article by Tony Malkovic on phys.org provides a few more details about BrainChip and about the deal,

Mr Van der Made and the company, also called BrainChip, are now based in Silicon Valley in California and he returned to Perth last month as part of the company’s recent merger and listing on the Australian Stock Exchange.

He says BrainChip has the ability to learn autonomously, evolve and associate information and respond to stimuli like a brain.

Mr Van der Made says the company’s chip technology is more than 5,000 [times] faster than other technologies, yet uses only 1/1,000th of the power.

“It’s a hardware only solution, there is no software to slow things down,” he says.

“It doesn’t execute instructions, it learns and supplies what it has learnt to new information.

“BrainChip is on the road to position itself at the forefront of artificial intelligence,” he says.

“We have a clear advantage, at least 10 years, over anybody else in the market, that includes IBM.”

BrainChip is aiming at the global semiconductor market involving almost anything that involves a microprocessor.

You can find out more about the company, BrainChip, here. The site does have a little more information about the technology,

Spiking Neuron Adaptive Processor (SNAP)

BrainChip’s inventor, Peter van der Made, has created an exciting new Spiking Neural Networking technology that has the ability to learn autonomously, evolve and associate information just like the human brain. The technology is developed as a digital design containing a configurable “sea of biomimic neurons”.

The technology is fast, completely digital, and consumes very low power, making it feasible to integrate large networks into portable battery-operated products, something that has never been possible before.

BrainChip neurons autonomously learn through a process known as STDP (Synaptic Time Dependent Plasticity). BrainChip’s fully digital neurons process input spikes directly in hardware. Sensory neurons convert physical stimuli into spikes. Learning occurs when the input is intense, or repeating through feedback and this is directly correlated to the way the brain learns.
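STDP itself is a learning rule borrowed from neuroscience rather than something BrainChip invented, and the core idea is simple: a synapse is strengthened when its input spike arrives shortly before the neuron fires, and weakened when it arrives shortly after. A minimal pair-based sketch in Python, with made-up constants (BrainChip’s actual hardware rule isn’t described on the site), looks like this,

```python
import math

# Pair-based STDP weight update; constants are arbitrary, for illustration only.
A_PLUS, A_MINUS = 0.01, 0.012   # potentiation / depression learning rates
TAU = 20.0                      # time constant (ms)

def stdp_update(weight, dt):
    """dt = post-synaptic spike time minus pre-synaptic spike time (ms)."""
    if dt > 0:      # input spike arrived just before the neuron fired: strengthen
        weight += A_PLUS * math.exp(-dt / TAU)
    elif dt < 0:    # input spike arrived just after the neuron fired: weaken
        weight -= A_MINUS * math.exp(dt / TAU)
    return max(0.0, min(1.0, weight))  # keep the synaptic weight bounded

w = 0.5
w = stdp_update(w, dt=5.0)    # causal pairing: weight grows
w = stdp_update(w, dt=-5.0)   # anti-causal pairing: weight shrinks
print(round(w, 4))
```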

Computing Artificial Neural Networks (ANNs)

The brain consists of specialized nerve cells that communicate with one another. Each such nerve cell is called a neuron. The inputs are memory nodes called synapses. When the neuron associates information, it produces a ‘spike’ or a ‘spike train’. Each spike is a pulse that triggers a value in the next synapse. Synapses store values, similar to the way a computer stores numbers. In combination, these values determine the function of the neural network. Synapses acquire values through learning.

In Artificial Neural Networks (ANNs) this complex function is generally simplified to a static summation and compare function, which severely limits computational power. BrainChip has redefined how neural networks work, replicating the behaviour of the brain. BrainChip’s artificial neurons are completely digital and biologically realistic, resulting in increased computational power, high speed and extremely low power consumption.
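The ‘static summation and compare function’ the company is contrasting itself with is the classic artificial neuron from any textbook: multiply inputs by weights, sum them, and compare against a threshold. For comparison with the spiking sketch earlier in this post, it looks something like this (again, a generic illustration, not anyone’s product),

```python
# The textbook "summation and compare" artificial neuron the quoted text refers to:
# multiply inputs by weights, sum, and compare against a threshold.
def static_neuron(inputs, weights, bias=0.0, threshold=0.0):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > threshold else 0   # evaluated on every input, spike or no spike

print(static_neuron([0.2, 0.9, 0.4], [0.5, -0.3, 0.8]))   # -> 1
```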

The Problem with Artificial Neural Networks

Standard ANNs, running on computer hardware, are processed sequentially; the processor runs a program that defines the neural network. This consumes considerable time, and because these neurons are processed sequentially, all this delayed time adds up, resulting in a significant linear decline in network performance with size.

BrainChip neurons are all mapped in parallel, so the performance of the network is not dependent on the size of the network, providing a clear speed advantage. Because there is no decline in performance with network size, and learning also takes place in parallel within each synapse, STDP learning is very fast.
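The scaling claim can be put in rough numbers. If a sequential processor spends a fixed amount of time on each neuron per timestep, the cost of a timestep grows linearly with network size, whereas hardware that maps every neuron in parallel pays roughly the same cost regardless of size. A back-of-envelope sketch, with invented figures chosen only to show the shape of the argument,

```python
# Back-of-envelope comparison of per-timestep cost; the figures are invented
# purely to show the shape of the argument, not measured numbers.
SEQ_TIME_PER_NEURON_US = 0.1   # hypothetical µs per neuron on a sequential processor
PAR_TIME_PER_STEP_US = 1.0     # hypothetical fixed µs per step for fully parallel hardware

for n_neurons in (1_000, 10_000, 100_000):
    sequential = n_neurons * SEQ_TIME_PER_NEURON_US   # grows linearly with network size
    parallel = PAR_TIME_PER_STEP_US                   # stays flat
    print(f"{n_neurons:>7} neurons: sequential ~{sequential:8.1f} µs, parallel ~{parallel:.1f} µs")
```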

A hardware solution

BrainChip’s digital neural technology is the only custom hardware solution that is capable of STDP learning. The hardware requires no coding and has no software as it evolves learning through experience and user direction.

The BrainChip neuron is unique in that it is completely digital, behaves asynchronously like an analog neuron, and has a higher level of biological realism. It is more sophisticated than software neural models and is many orders of magnitude faster. The BrainChip neuron consists entirely of binary logic gates with no traditional CPU core. Hence, there are no ‘programming’ steps. Learning and training take the place of programming and coding, like a child learning a task for the first time.

Software ‘neurons’, to compensate for limited processing power, are simplified to a point where they do not resemble any of the features of a biological neuron. This is due to the sequential nature of computers, whereby all data has to pass through a central processor in chunks of 16, 32 or 64 bits. In contrast, the brain’s network is parallel and processes the equivalent of millions of data bits simultaneously.

A significantly faster technology

Performing emulation in digital hardware has distinct advantages over software. As software is processed sequentially, one instruction at a time, Software Neural Networks perform slower with increasing size. Parallel hardware does not have this problem and maintains the same speed no matter how large the network is. Another advantage of hardware is that it is more power efficient by several orders of magnitude.

The speed of the BrainChip device is unparalleled in the industry.

For large neural networks a GPU (Graphics Processing Unit) is ~70 times faster than the Intel i7 executing a similar size neural network. The BrainChip neural network is faster still and takes far fewer CPU (Central Processing Unit) cycles, with just a little communication overhead, which means that the CPU is available for other tasks. The BrainChip network also responds much faster than a software network accelerating the performance of the entire system.

The BrainChip network is completely parallel, with no sequential dependencies. This means that the network does not slow down with increasing size.

Endorsed by the neuroscience community

A number of the world’s pre-eminent neuroscientists have endorsed the technology and are agreeing to jointly develop projects.

BrainChip has the potential to become the de facto standard for all autonomous learning technology and computer products.

Patented

BrainChip’s autonomous learning technology patent was granted on the 21st September 2008 (Patent number US 8,250,011 “Autonomous learning dynamic artificial neural computing device and brain inspired system”). BrainChip is the only company in the world to have achieved autonomous learning in a network of Digital Neurons without any software.

A prototype Spiking Neuron Adaptive Processor was designed as a ‘proof of concept’ chip.

The first tests were completed at the end of 2007 and this design was used as the foundation for the US patent application which was filed in 2008. BrainChip has also applied for a continuation-in-part patent filed in 2012, the “Method and System for creating Dynamic Neural Function Libraries”, US Patent Application 13/461,800 which is pending.

Van der Made doesn’t seem to have published any papers on this work and the description of the technology provided on the website is frustratingly vague. There are many acronyms for processes but no mention of what this hardware might be. For example, is it based on a memristor or some kind of atomic ionic switch or something else altogether?

It would be interesting to find out more but, presumably, van der Made wishes to withhold details. There are many companies following the same strategy while pursuing what they view as a business advantage.

* Artificial neuron link added June 26, 2015 at 1017 hours PST.